Test Report: Docker_Linux_containerd 22230

c636a8658fdd5cfdd18416b9a30087c97060a836:2025-12-19:42856

Failed tests (10/420)

TestFunctional/parallel/DashboardCmd (15.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1] ...
E1219 02:33:53.777270  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1] stderr:
I1219 02:33:41.209203  299213 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:41.209660  299213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:41.209669  299213 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:41.209675  299213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:41.209978  299213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:33:41.210381  299213 mustload.go:66] Loading cluster: functional-180941
I1219 02:33:41.210958  299213 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:33:41.211489  299213 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:33:41.236321  299213 host.go:66] Checking if "functional-180941" exists ...
I1219 02:33:41.236730  299213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:33:41.312560  299213 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.300701489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:33:41.312751  299213 api_server.go:166] Checking apiserver status ...
I1219 02:33:41.312828  299213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:33:41.312904  299213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:33:41.334011  299213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:33:41.449989  299213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4944/cgroup
W1219 02:33:41.459958  299213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4944/cgroup: Process exited with status 1
stdout:

stderr:
I1219 02:33:41.460022  299213 ssh_runner.go:195] Run: ls
I1219 02:33:41.464417  299213 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1219 02:33:41.470033  299213 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1219 02:33:41.470084  299213 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:33:41.470276  299213 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:33:41.470292  299213 addons.go:70] Setting dashboard=true in profile "functional-180941"
I1219 02:33:41.470300  299213 addons.go:239] Setting addon dashboard=true in "functional-180941"
I1219 02:33:41.470326  299213 host.go:66] Checking if "functional-180941" exists ...
I1219 02:33:41.470803  299213 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:33:41.491873  299213 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:33:41.491898  299213 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:33:41.491991  299213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:33:41.511817  299213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:33:41.627055  299213 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:33:41.630655  299213 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:33:41.634645  299213 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:33:43.592313  299213 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.957630594s)
I1219 02:33:43.592411  299213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:33:46.742006  299213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.149546384s)
I1219 02:33:46.742160  299213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:33:46.934400  299213 addons.go:500] Verifying addon dashboard=true in "functional-180941"
I1219 02:33:46.934794  299213 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:33:46.955611  299213 out.go:179] * Verifying dashboard addon...
I1219 02:33:46.957234  299213 kapi.go:59] client config for functional-180941: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:46.957773  299213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:33:46.957789  299213 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:33:46.957797  299213 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:33:46.957804  299213 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:33:46.957811  299213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:33:46.958159  299213 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:33:46.966730  299213 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:33:46.966748  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:47.462060  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:47.961921  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:48.461719  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:48.962796  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:49.461063  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:49.965063  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:50.462048  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:50.962495  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:51.462226  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:51.962044  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:52.461270  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:52.961567  299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:53.462265  299213 kapi.go:107] duration metric: took 6.504090925s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:33:53.464082  299213 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-180941 addons enable metrics-server

I1219 02:33:53.465599  299213 addons.go:202] Writing out "functional-180941" config to set dashboard=true...
W1219 02:33:53.465908  299213 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:33:53.466454  299213 kapi.go:59] client config for functional-180941: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:53.469377  299213 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy  kubernetes-dashboard  921f0e42-b2f8-4174-bb8c-c5692c32573d 706 0 2025-12-19 02:33:46 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:33:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:31402,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.110.21.49,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.21.49],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:33:53.469557  299213 host.go:66] Checking if "functional-180941" exists ...
I1219 02:33:53.469813  299213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-180941
I1219 02:33:53.491956  299213 kapi.go:59] client config for functional-180941: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:53.500062  299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.504332  299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.507971  299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.511865  299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.695759  299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.764576  299213 out.go:179] * Dashboard Token:
I1219 02:33:53.765743  299213 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6IkdIQ2RYQnNHYlpmOXZ6dnllWm9RQThuZGJkUm1WS3ZpUHI4N2lIZlVGWUkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4MDMzLCJpYXQiOjE3NjYxMTE2MzMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNGM4OTU1ZTMtMTRlMy00YjYyLWIxN2ItNThjNzdhMzQ5MzQ4Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYWRhNmUwODEtMjI1Ny00OGY3LWFkOWQtMzljMmY2ZDI4MWY0In19LCJuYmYiOjE3NjYxMTE2MzMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.wfL9uA2AWerKmwno28vMTFfBuA6xfVbRzwZCcNdJyQiFcGEEPqa9lX3L3s1wZtE4q_b-pYqYhGLlGCRMcMPRUbLKgcihWLF_UOQjLxKgtaeTEDEbW6vmIuE0c8-my2Yh2dl5TyR-lBH_ACHEbHPJr3CtMVgNFrUmyh__8IniP6e2CNIbbLm0ABPxEvaTPAezJVYY2vTImUHI7PyMzVU7ewgFwQTXKs6jyTres3RpqxpYU14pGzVGd48PHXp9Gw8LPwbZbjqsHOTUXVs_f4UX0zmevWspcwi1Kxy-vSB6XZm8c2fIQn8DQ2OIR8Nbv4Tq-oX0EYq8SLCKyoGcIX2a4w
I1219 02:33:53.766835  299213 out.go:203] https://192.168.49.2:31402
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-180941
helpers_test.go:244: (dbg) docker inspect functional-180941:

-- stdout --
	[
	    {
	        "Id": "699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6",
	        "Created": "2025-12-19T02:32:01.339298956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 288607,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:32:01.370634155Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/hostname",
	        "HostsPath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/hosts",
	        "LogPath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6-json.log",
	        "Name": "/functional-180941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-180941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-180941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6",
	                "LowerDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-180941",
	                "Source": "/var/lib/docker/volumes/functional-180941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-180941",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-180941",
	                "name.minikube.sigs.k8s.io": "functional-180941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "83807b2cc2304c7bab0a19f825c7badb9a243f11bb649fd461be870ef5dee81b",
	            "SandboxKey": "/var/run/docker/netns/83807b2cc230",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-180941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "52ec214e446330d29bad9b1e27048dbc798e4000f73ead459831b48f3a7830ec",
	                    "EndpointID": "630c627811ac293db643c1e480a06874e14f3ebd0b6fa9fdaa6add6f5ea3b93d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "a6:1b:ac:2a:b6:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-180941",
	                        "699821475ce7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-180941 -n functional-180941
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 logs -n 25: (1.748738073s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-180941 ssh sudo umount -f /mount-9p                                                                                  │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ image   │ functional-180941 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ mount   │ -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount3 --alsologtostderr -v=1               │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ ssh     │ functional-180941 ssh findmnt -T /mount1                                                                                        │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ mount   │ -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount1 --alsologtostderr -v=1               │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ service │ functional-180941 service list -o json                                                                                          │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ image   │ functional-180941 image ls                                                                                                      │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ image   │ functional-180941 image save --daemon kicbase/echo-server:functional-180941 --alsologtostderr                                   │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh findmnt -T /mount1                                                                                        │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh findmnt -T /mount2                                                                                        │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh findmnt -T /mount3                                                                                        │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ mount   │ -p functional-180941 --kill=true                                                                                                │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ service │ functional-180941 service --namespace=default --https --url hello-node                                                          │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh echo hello                                                                                                │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh cat /etc/hostname                                                                                         │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ service │ functional-180941 service hello-node --url --format={{.IP}}                                                                     │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ tunnel  │ functional-180941 tunnel --alsologtostderr                                                                                      │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ tunnel  │ functional-180941 tunnel --alsologtostderr                                                                                      │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ service │ functional-180941 service hello-node --url                                                                                      │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ tunnel  │ functional-180941 tunnel --alsologtostderr                                                                                      │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ ssh     │ functional-180941 ssh sudo cat /etc/ssl/certs/257493.pem                                                                        │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh sudo cat /usr/share/ca-certificates/257493.pem                                                            │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh sudo cat /etc/ssl/certs/51391683.0                                                                        │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh sudo cat /etc/ssl/certs/2574932.pem                                                                       │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh     │ functional-180941 ssh sudo cat /usr/share/ca-certificates/2574932.pem                                                           │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:33:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:33:41.191346  299204 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:33:41.191648  299204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:33:41.191660  299204 out.go:374] Setting ErrFile to fd 2...
	I1219 02:33:41.191668  299204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:33:41.191920  299204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:33:41.192389  299204 out.go:368] Setting JSON to false
	I1219 02:33:41.193782  299204 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4560,"bootTime":1766107061,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:33:41.193859  299204 start.go:143] virtualization: kvm guest
	I1219 02:33:41.199236  299204 out.go:179] * [functional-180941] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:33:41.200720  299204 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:33:41.200747  299204 notify.go:221] Checking for updates...
	I1219 02:33:41.203200  299204 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:33:41.204606  299204 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:33:41.205808  299204 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:33:41.206791  299204 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:33:41.207865  299204 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:33:41.209608  299204 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:33:41.210542  299204 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:33:41.241530  299204 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:33:41.241678  299204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:33:41.313040  299204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.300701489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:33:41.313183  299204 docker.go:319] overlay module found
	I1219 02:33:41.319388  299204 out.go:179] * Using the docker driver based on existing profile
	I1219 02:33:41.320225  299204 start.go:309] selected driver: docker
	I1219 02:33:41.320241  299204 start.go:928] validating driver "docker" against &{Name:functional-180941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-180941 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:33:41.320366  299204 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:33:41.320455  299204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:33:41.381136  299204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.371469379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:33:41.381942  299204 cni.go:84] Creating CNI manager for ""
	I1219 02:33:41.382036  299204 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 02:33:41.382110  299204 start.go:353] cluster config:
	{Name:functional-180941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-180941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:33:41.383682  299204 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                       ATTEMPT             POD ID              POD                                         NAMESPACE
	5e345c165d4a4       59f642f485d26       2 seconds ago        Running             kubernetes-dashboard-web   0                   c4a3939a31d77       kubernetes-dashboard-web-5c9f966b98-htfj4   kubernetes-dashboard
	01e01e1796cdd       56cc512116c8f       10 seconds ago       Exited              mount-munger               0                   452388232e3cd       busybox-mount                               default
	e5a930fb2ef9c       9056ab77afb8e       13 seconds ago       Running             echo-server                0                   e41de8f72e96e       hello-node-75c85bcc94-4z4zz                 default
	b39cd9d618557       5826b25d990d7       41 seconds ago       Running             kube-controller-manager    2                   034998a26aa20       kube-controller-manager-functional-180941   kube-system
	14ac2a6e39974       aa27095f56193       41 seconds ago       Running             kube-apiserver             0                   8909f53b69946       kube-apiserver-functional-180941            kube-system
	978c221369557       a3e246e9556e9       41 seconds ago       Running             etcd                       1                   6e2f053cb8397       etcd-functional-180941                      kube-system
	73020ed8c20ec       aec12dadf56dd       52 seconds ago       Running             kube-scheduler             1                   b59d9640c4b37       kube-scheduler-functional-180941            kube-system
	d333d965ea2cd       36eef8e07bdd6       52 seconds ago       Running             kube-proxy                 1                   cce762d72cd44       kube-proxy-j855q                            kube-system
	e7f1ab2e451e8       5826b25d990d7       52 seconds ago       Exited              kube-controller-manager    1                   034998a26aa20       kube-controller-manager-functional-180941   kube-system
	3d1f591b08793       6e38f40d628db       53 seconds ago       Running             storage-provisioner        1                   104cc4521319d       storage-provisioner                         kube-system
	d9d9e1018c454       52546a367cc9e       53 seconds ago       Running             coredns                    1                   1369879608ca1       coredns-66bc5c9577-wzv8l                    kube-system
	9026560daa2d2       4921d7a6dffa9       53 seconds ago       Running             kindnet-cni                1                   ddc66894c9955       kindnet-xh25x                               kube-system
	009e96b5d79ed       52546a367cc9e       About a minute ago   Exited              coredns                    0                   1369879608ca1       coredns-66bc5c9577-wzv8l                    kube-system
	4d325bd49d2ae       6e38f40d628db       About a minute ago   Exited              storage-provisioner        0                   104cc4521319d       storage-provisioner                         kube-system
	8eded8e1cc2de       4921d7a6dffa9       About a minute ago   Exited              kindnet-cni                0                   ddc66894c9955       kindnet-xh25x                               kube-system
	296dd104376b4       36eef8e07bdd6       About a minute ago   Exited              kube-proxy                 0                   cce762d72cd44       kube-proxy-j855q                            kube-system
	c64dcbfa4c96d       aec12dadf56dd       About a minute ago   Exited              kube-scheduler             0                   b59d9640c4b37       kube-scheduler-functional-180941            kube-system
	5fe6627431dbf       a3e246e9556e9       About a minute ago   Exited              etcd                       0                   6e2f053cb8397       etcd-functional-180941                      kube-system
	
	
	==> containerd <==
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.023231603Z" level=info msg="connecting to shim 5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1" address="unix:///run/containerd/s/0c7484f46bb46c91f5b5b3c64c93ccf9896024852efff89965fe295f91156fbd" protocol=ttrpc version=3
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.099134821Z" level=info msg="StartContainer for \"5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1\" returns successfully"
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.988068216Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod9b003897_48f4_4bfb_862f_b38aa4ba1260.slice/cri-containerd-9026560daa2d2d55434b5762a61b6fd06b3a43ed6057f7dea0b212358df2fff6.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.988231829Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod9b003897_48f4_4bfb_862f_b38aa4ba1260.slice/cri-containerd-9026560daa2d2d55434b5762a61b6fd06b3a43ed6057f7dea0b212358df2fff6.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.989195035Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode72bfb67_8e41_4d33_a35e_9300851b3b09.slice/cri-containerd-d333d965ea2cd406df4dbfc0a3009ea6fc19b4e00e9486ab18dd8d5480fc5629.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.989309746Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode72bfb67_8e41_4d33_a35e_9300851b3b09.slice/cri-containerd-d333d965ea2cd406df4dbfc0a3009ea6fc19b4e00e9486ab18dd8d5480fc5629.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.990122090Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd0f286b_83c9_47db_8ca8_262541006327.slice/cri-containerd-d9d9e1018c454b4fb842edfd94ce775aaf6efcbf99fd0057312823735f2de248.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.990230320Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd0f286b_83c9_47db_8ca8_262541006327.slice/cri-containerd-d9d9e1018c454b4fb842edfd94ce775aaf6efcbf99fd0057312823735f2de248.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.991088318Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867e906_a63a_422b_beae_6e1d81b5654a.slice/cri-containerd-3d1f591b08793b5957d08a468189d3db4ab4eaedd80f3caadc21cc563081e0bc.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.991209152Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867e906_a63a_422b_beae_6e1d81b5654a.slice/cri-containerd-3d1f591b08793b5957d08a468189d3db4ab4eaedd80f3caadc21cc563081e0bc.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.992125176Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de06074c586be0533d71f8f54d7e57c.slice/cri-containerd-14ac2a6e3997412d1b684bbf8fc1a50a3c066b8c6e95fb58ef19669045c85531.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.992255989Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de06074c586be0533d71f8f54d7e57c.slice/cri-containerd-14ac2a6e3997412d1b684bbf8fc1a50a3c066b8c6e95fb58ef19669045c85531.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.993169099Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d10805f753972429fa4ab31a638d54.slice/cri-containerd-b39cd9d618557e29380ddd4c5b661a409cad10586915d650a33d1d82d44c9ade.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.993287582Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d10805f753972429fa4ab31a638d54.slice/cri-containerd-b39cd9d618557e29380ddd4c5b661a409cad10586915d650a33d1d82d44c9ade.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.993995411Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod513d90b1_bb58_4604_96fc_4a7ff1689c05.slice/cri-containerd-e5a930fb2ef9c180226083661a45affe2306b4356ba63eae9877ce0405b085ef.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.994097175Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod513d90b1_bb58_4604_96fc_4a7ff1689c05.slice/cri-containerd-e5a930fb2ef9c180226083661a45affe2306b4356ba63eae9877ce0405b085ef.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.995058563Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fe9fcf_a85d_4ccf_aac4_cf15ef4ec85f.slice/cri-containerd-5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.995285986Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fe9fcf_a85d_4ccf_aac4_cf15ef4ec85f.slice/cri-containerd-5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.996926884Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada616becfcef6d3ba653f5e99f70f60.slice/cri-containerd-978c2213695574722ba0df876a653ef754fa77c949290ea538ee2e4e2020c375.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.997044815Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada616becfcef6d3ba653f5e99f70f60.slice/cri-containerd-978c2213695574722ba0df876a653ef754fa77c949290ea538ee2e4e2020c375.scope/hugetlb.1GB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.997908577Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3531edfb0a338dd5e68dd12ad922cfe3.slice/cri-containerd-73020ed8c20ec15aefab8b3b5f3654c45ab1f831a3da9d52092d960bd67f098f.scope/hugetlb.2MB.events\""
	Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.998021730Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3531edfb0a338dd5e68dd12ad922cfe3.slice/cri-containerd-73020ed8c20ec15aefab8b3b5f3654c45ab1f831a3da9d52092d960bd67f098f.scope/hugetlb.1GB.events\""
	Dec 19 02:33:53 functional-180941 containerd[3816]: time="2025-12-19T02:33:53.474994730Z" level=info msg="RunPodSandbox for name:\"nginx-svc\"  uid:\"e77ecfa4-b7ec-4449-9875-5ff766083fa4\"  namespace:\"default\""
	Dec 19 02:33:53 functional-180941 containerd[3816]: time="2025-12-19T02:33:53.509321659Z" level=info msg="connecting to shim b30f7a7c7edfde844f26e412444e4b0cdb4ff9ccffbefde976cae0c09bd20cce" address="unix:///run/containerd/s/8e6e02bc9f77de8cf3f6168c8feccb155ca9e56f4bf65c39f55e8d399c27757d" namespace=k8s.io protocol=ttrpc version=3
	Dec 19 02:33:53 functional-180941 containerd[3816]: time="2025-12-19T02:33:53.586561991Z" level=info msg="RunPodSandbox for name:\"nginx-svc\"  uid:\"e77ecfa4-b7ec-4449-9875-5ff766083fa4\"  namespace:\"default\" returns sandbox id \"b30f7a7c7edfde844f26e412444e4b0cdb4ff9ccffbefde976cae0c09bd20cce\""
	
	
	==> coredns [009e96b5d79eda655f895c8bbc5db174f1320debd7bb79bbd76b2d3cf491505f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46335 - 2289 "HINFO IN 4435456827762028857.6877573451654151863. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02952319s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d9d9e1018c454b4fb842edfd94ce775aaf6efcbf99fd0057312823735f2de248] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55503 - 485 "HINFO IN 961960786048569969.2083331202308493596. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036526094s
	
	
	==> describe nodes <==
	Name:               functional-180941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-180941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-180941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_32_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:32:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-180941
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:33:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-180941
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                113dc0ee-fc8e-4afa-94c5-3ed969a2cce9
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-4z4zz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     hello-node-connect-7d85dfc575-cmmxr                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  default                     nginx-svc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 coredns-66bc5c9577-wzv8l                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
	  kube-system                 etcd-functional-180941                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-xh25x                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-functional-180941                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-functional-180941                200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-j855q                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-functional-180941                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kubernetes-dashboard        kubernetes-dashboard-api-c5cfcc8f7-d8jxv                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
	  kubernetes-dashboard        kubernetes-dashboard-auth-57b44fc5d4-glcd2               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-m8mlk                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-htfj4                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 91s                kube-proxy       
	  Normal  Starting                 37s                kube-proxy       
	  Normal  NodeHasSufficientPID     98s                kubelet          Node functional-180941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node functional-180941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node functional-180941 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           94s                node-controller  Node functional-180941 event: Registered Node functional-180941 in Controller
	  Normal  NodeReady                79s                kubelet          Node functional-180941 status is now: NodeReady
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node functional-180941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node functional-180941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node functional-180941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           37s                node-controller  Node functional-180941 event: Registered Node functional-180941 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [5fe6627431dbf2438de339cd3c2f774be8680c2c817b42ff82b17eb12ce2d65b] <==
	{"level":"warn","ts":"2025-12-19T02:32:14.151098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:14.158003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:14.166559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:14.178756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:14.185878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:14.193063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:14.250864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:33:11.604834Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:33:11.604940Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-180941","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-19T02:33:11.605065Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:33:11.606630Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:33:11.606712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:33:11.606751Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-19T02:33:11.606798Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:33:11.606828Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:33:11.606871Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:33:11.606895Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:33:11.606902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:33:11.606908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-19T02:33:11.606912Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:33:11.606858Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-19T02:33:11.608378Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-19T02:33:11.608435Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:33:11.608465Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-19T02:33:11.608474Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-180941","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [978c2213695574722ba0df876a653ef754fa77c949290ea538ee2e4e2020c375] <==
	{"level":"warn","ts":"2025-12-19T02:33:14.625000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.633685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.640444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.655216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.662012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.669786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.676874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.684553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.691653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.700155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.707182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.714730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.731630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.738214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.744739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:14.792248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.657194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.670015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.679943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.716088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.722948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.733347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.751324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.762683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:33:48.802509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:33:55 up  1:16,  0 user,  load average: 1.62, 13.61, 55.28
	Linux functional-180941 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8eded8e1cc2ded02f545d869951b55d5921f8ca55217b36f3ea6b1112c3fd2f1] <==
	I1219 02:32:25.988530       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 02:32:25.988848       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1219 02:32:25.988981       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:32:25.988997       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:32:25.989030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:32:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:32:26.236045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:32:26.236075       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:32:26.236087       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:32:26.236230       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:32:26.636205       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:32:26.636241       1 metrics.go:72] Registering metrics
	I1219 02:32:26.636310       1 controller.go:711] "Syncing nftables rules"
	I1219 02:32:36.236766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:36.236834       1 main.go:301] handling current node
	I1219 02:32:46.236226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:46.236279       1 main.go:301] handling current node
	I1219 02:32:56.236370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:56.236409       1 main.go:301] handling current node
	
	
	==> kindnet [9026560daa2d2d55434b5762a61b6fd06b3a43ed6057f7dea0b212358df2fff6] <==
	E1219 02:33:01.986953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:33:01.987169       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1219 02:33:02.814881       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1219 02:33:02.831912       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1219 02:33:03.234880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:33:03.280866       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1219 02:33:05.817768       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:33:05.831653       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1219 02:33:05.930770       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1219 02:33:06.422982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1219 02:33:10.438335       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1219 02:33:10.765918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:33:11.416505       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1219 02:33:11.710208       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1219 02:33:21.890870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:33:21.890956       1 main.go:301] handling current node
	I1219 02:33:24.191701       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:33:24.191731       1 metrics.go:72] Registering metrics
	I1219 02:33:24.191811       1 controller.go:711] "Syncing nftables rules"
	I1219 02:33:31.890727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:33:31.890769       1 main.go:301] handling current node
	I1219 02:33:41.890894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:33:41.890941       1 main.go:301] handling current node
	I1219 02:33:51.890911       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:33:51.891041       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14ac2a6e3997412d1b684bbf8fc1a50a3c066b8c6e95fb58ef19669045c85531] <==
	I1219 02:33:44.229221       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:33:44.236571       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:33:44.243509       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:33:44.260035       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:33:44.268026       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:33:46.639431       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:33:46.686600       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.110.21.49"}
	I1219 02:33:46.689680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.109.40.251"}
	I1219 02:33:46.694860       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.96.19.203"}
	I1219 02:33:46.697189       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.11.5"}
	I1219 02:33:46.701691       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.98.156.91"}
	W1219 02:33:48.645680       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.657041       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.669978       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:33:48.679889       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.706699       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:33:48.722841       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.733290       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.751242       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.762665       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.775206       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.792632       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:33:48.802411       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 02:33:53.165796       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.116.109"}
	I1219 02:33:55.192190       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.100.243"}
	
	
	==> kube-controller-manager [b39cd9d618557e29380ddd4c5b661a409cad10586915d650a33d1d82d44c9ade] <==
	I1219 02:33:18.625590       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:33:18.626221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 02:33:18.626848       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 02:33:18.627200       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 02:33:18.627478       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:33:18.629346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:33:18.629997       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 02:33:18.630096       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 02:33:18.633733       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 02:33:18.637003       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:33:18.639282       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:33:18.639293       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:33:48.634202       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
	I1219 02:33:48.634257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:33:48.634290       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
	I1219 02:33:48.634320       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
	I1219 02:33:48.634354       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
	I1219 02:33:48.634391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:33:48.634447       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
	I1219 02:33:48.634489       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:33:48.634531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:33:48.634650       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1219 02:33:48.648719       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:33:49.835186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:33:49.849392       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [e7f1ab2e451e8d1832587d91da1a66105cf4f700f68c5bbdb245d0ce51ecacaa] <==
	I1219 02:33:03.054880       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:33:03.434988       1 controllermanager.go:191] "Starting" version="v1.34.3"
	I1219 02:33:03.435012       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:33:03.436533       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1219 02:33:03.436542       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1219 02:33:03.436936       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1219 02:33:03.437057       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 02:33:13.438558       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [296dd104376b46a04a176ca9b4b06cd8abeba8ff9549f9aa36379e1e289910c0] <==
	I1219 02:32:23.486269       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:32:23.549556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:32:23.650660       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:32:23.650718       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:32:23.650847       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:32:23.672863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:32:23.672935       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:32:23.678176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:32:23.678611       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:32:23.678651       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:32:23.680736       1 config.go:200] "Starting service config controller"
	I1219 02:32:23.680759       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:32:23.680773       1 config.go:309] "Starting node config controller"
	I1219 02:32:23.680778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:32:23.680786       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:32:23.680800       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:32:23.680806       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:32:23.680826       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:32:23.680878       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:32:23.781924       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:32:23.781942       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:32:23.781966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d333d965ea2cd406df4dbfc0a3009ea6fc19b4e00e9486ab18dd8d5480fc5629] <==
	I1219 02:33:02.533044       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1219 02:33:02.534219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:33:03.392134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:33:06.289967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:33:10.699791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1219 02:33:18.133252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:33:18.133291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:33:18.133368       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:33:18.155887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:33:18.155974       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:33:18.162246       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:33:18.162672       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:33:18.162705       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:33:18.164047       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:33:18.164079       1 config.go:200] "Starting service config controller"
	I1219 02:33:18.164095       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:33:18.164088       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:33:18.164114       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:33:18.164113       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:33:18.164147       1 config.go:309] "Starting node config controller"
	I1219 02:33:18.164155       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:33:18.264302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:33:18.264326       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:33:18.264363       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:33:18.264380       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [73020ed8c20ec15aefab8b3b5f3654c45ab1f831a3da9d52092d960bd67f098f] <==
	E1219 02:33:07.464067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:33:07.567061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 02:33:07.638842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:33:07.717843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:33:08.102921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:33:10.137353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:33:10.619167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:33:10.707146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:33:10.781682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:33:10.863688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:33:11.244900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:33:11.343646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:33:11.596275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:33:11.775282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:33:12.403636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:33:12.434362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:33:12.468170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:33:12.479593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:33:12.588547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:33:12.745806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:33:13.509021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:33:15.188848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 02:33:15.202705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:33:15.202825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1219 02:33:27.503401       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c64dcbfa4c96d0561fc7d99addb1ea41361024efacc0f7dcae1170ed6e33d2f3] <==
	E1219 02:32:14.945776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:32:14.945797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:32:14.945872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:32:14.945954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:32:14.945993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:32:14.946011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:32:14.946050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:32:14.946082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:32:14.946108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:32:14.946161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:32:14.946158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:32:14.946314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:32:14.946688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:32:15.782945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:32:15.794772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:32:15.812099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:32:15.817010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:32:15.823045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1219 02:32:16.543347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:33:01.456664       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:33:01.456756       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:33:01.456709       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:33:01.457000       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:33:01.457018       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:33:01.457043       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 19 02:33:41 functional-180941 kubelet[4795]: I1219 02:33:41.966730    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rm6\" (UniqueName: \"kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6\") pod \"busybox-mount\" (UID: \"e428a017-930f-45d6-91a4-41044c3cd159\") " pod="default/busybox-mount"
	Dec 19 02:33:45 functional-180941 kubelet[4795]: I1219 02:33:45.017506    4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-75c85bcc94-4z4zz" podStartSLOduration=3.935733318 podStartE2EDuration="6.017481771s" podCreationTimestamp="2025-12-19 02:33:39 +0000 UTC" firstStartedPulling="2025-12-19 02:33:39.551338867 +0000 UTC m=+26.762353014" lastFinishedPulling="2025-12-19 02:33:41.633087322 +0000 UTC m=+28.844101467" observedRunningTime="2025-12-19 02:33:42.008029654 +0000 UTC m=+29.219043832" watchObservedRunningTime="2025-12-19 02:33:45.017481771 +0000 UTC m=+32.228495931"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.193401    4795 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7rm6\" (UniqueName: \"kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6\") pod \"e428a017-930f-45d6-91a4-41044c3cd159\" (UID: \"e428a017-930f-45d6-91a4-41044c3cd159\") "
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.193468    4795 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e428a017-930f-45d6-91a4-41044c3cd159-test-volume\") pod \"e428a017-930f-45d6-91a4-41044c3cd159\" (UID: \"e428a017-930f-45d6-91a4-41044c3cd159\") "
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.193574    4795 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e428a017-930f-45d6-91a4-41044c3cd159-test-volume" (OuterVolumeSpecName: "test-volume") pod "e428a017-930f-45d6-91a4-41044c3cd159" (UID: "e428a017-930f-45d6-91a4-41044c3cd159"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.196186    4795 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6" (OuterVolumeSpecName: "kube-api-access-v7rm6") pod "e428a017-930f-45d6-91a4-41044c3cd159" (UID: "e428a017-930f-45d6-91a4-41044c3cd159"). InnerVolumeSpecName "kube-api-access-v7rm6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.294115    4795 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7rm6\" (UniqueName: \"kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6\") on node \"functional-180941\" DevicePath \"\""
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.294160    4795 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e428a017-930f-45d6-91a4-41044c3cd159-test-volume\") on node \"functional-180941\" DevicePath \"\""
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.898983    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e3265b8-b920-4d8d-88ce-d1a5577ec92a-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-m8mlk\" (UID: \"1e3265b8-b920-4d8d-88ce-d1a5577ec92a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-m8mlk"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899070    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/1e3265b8-b920-4d8d-88ce-d1a5577ec92a-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-m8mlk\" (UID: \"1e3265b8-b920-4d8d-88ce-d1a5577ec92a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-m8mlk"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899154    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d85d\" (UniqueName: \"kubernetes.io/projected/f58425c1-30d8-451d-95e9-e5baa914d266-kube-api-access-4d85d\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988\" (UID: \"f58425c1-30d8-451d-95e9-e5baa914d266\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899197    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67d78b53-5a1c-42b7-8a19-5afbac88f345-tmp-volume\") pod \"kubernetes-dashboard-api-c5cfcc8f7-d8jxv\" (UID: \"67d78b53-5a1c-42b7-8a19-5afbac88f345\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-c5cfcc8f7-d8jxv"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899225    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98f6c7f8-cead-462c-8433-d56e7b95a6bf-tmp-volume\") pod \"kubernetes-dashboard-auth-57b44fc5d4-glcd2\" (UID: \"98f6c7f8-cead-462c-8433-d56e7b95a6bf\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-57b44fc5d4-glcd2"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899242    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbw9\" (UniqueName: \"kubernetes.io/projected/98f6c7f8-cead-462c-8433-d56e7b95a6bf-kube-api-access-pjbw9\") pod \"kubernetes-dashboard-auth-57b44fc5d4-glcd2\" (UID: \"98f6c7f8-cead-462c-8433-d56e7b95a6bf\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-57b44fc5d4-glcd2"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899256    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt59\" (UniqueName: \"kubernetes.io/projected/36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f-kube-api-access-mrt59\") pod \"kubernetes-dashboard-web-5c9f966b98-htfj4\" (UID: \"36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-htfj4"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899271    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/1e3265b8-b920-4d8d-88ce-d1a5577ec92a-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-m8mlk\" (UID: \"1e3265b8-b920-4d8d-88ce-d1a5577ec92a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-m8mlk"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899287    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9dsz\" (UniqueName: \"kubernetes.io/projected/67d78b53-5a1c-42b7-8a19-5afbac88f345-kube-api-access-d9dsz\") pod \"kubernetes-dashboard-api-c5cfcc8f7-d8jxv\" (UID: \"67d78b53-5a1c-42b7-8a19-5afbac88f345\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-c5cfcc8f7-d8jxv"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899300    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-htfj4\" (UID: \"36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-htfj4"
	Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899348    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f58425c1-30d8-451d-95e9-e5baa914d266-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988\" (UID: \"f58425c1-30d8-451d-95e9-e5baa914d266\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988"
	Dec 19 02:33:47 functional-180941 kubelet[4795]: I1219 02:33:47.015232    4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452388232e3cdb22d0e0a662cfc2478bcede2e29c5a8aa6d7cdd9ede97b8b9e7"
	Dec 19 02:33:51 functional-180941 kubelet[4795]: I1219 02:33:51.990999    4795 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:33:51 functional-180941 kubelet[4795]: I1219 02:33:51.991101    4795 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:33:53 functional-180941 kubelet[4795]: I1219 02:33:53.159063    4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-htfj4" podStartSLOduration=2.38067891 podStartE2EDuration="7.159036419s" podCreationTimestamp="2025-12-19 02:33:46 +0000 UTC" firstStartedPulling="2025-12-19 02:33:47.212389566 +0000 UTC m=+34.423403716" lastFinishedPulling="2025-12-19 02:33:51.990747076 +0000 UTC m=+39.201761225" observedRunningTime="2025-12-19 02:33:53.063647425 +0000 UTC m=+40.274661599" watchObservedRunningTime="2025-12-19 02:33:53.159036419 +0000 UTC m=+40.370050576"
	Dec 19 02:33:53 functional-180941 kubelet[4795]: I1219 02:33:53.242818    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dzt\" (UniqueName: \"kubernetes.io/projected/e77ecfa4-b7ec-4449-9875-5ff766083fa4-kube-api-access-69dzt\") pod \"nginx-svc\" (UID: \"e77ecfa4-b7ec-4449-9875-5ff766083fa4\") " pod="default/nginx-svc"
	Dec 19 02:33:55 functional-180941 kubelet[4795]: I1219 02:33:55.257016    4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv7gz\" (UniqueName: \"kubernetes.io/projected/f14500b1-7be9-4e64-ba7b-b824b9faef10-kube-api-access-sv7gz\") pod \"hello-node-connect-7d85dfc575-cmmxr\" (UID: \"f14500b1-7be9-4e64-ba7b-b824b9faef10\") " pod="default/hello-node-connect-7d85dfc575-cmmxr"
	
	
	==> kubernetes-dashboard [5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1] <==
	I1219 02:33:52.175785       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 02:33:52.175855       1 init.go:48] Using in-cluster config
	I1219 02:33:52.176125       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [3d1f591b08793b5957d08a468189d3db4ab4eaedd80f3caadc21cc563081e0bc] <==
	W1219 02:33:30.860526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:32.863699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:32.868447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:34.871997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:34.878290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:36.881069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:36.886009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:38.889528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:38.893935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:40.897482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:40.903706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:42.906996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:42.910991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:44.913833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:44.917916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:46.923458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:46.928113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:48.931681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:48.936480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:50.940547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:50.947760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:52.951401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:52.955428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:54.959461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:54.964315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4d325bd49d2ae8ad9b2605e9bd9667579857876c54bc9b7f71034e1b896677a4] <==
	I1219 02:32:36.959370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-180941_ff5ea45f-7c5f-49a0-a6c1-a5aef2d7867c!
	W1219 02:32:38.868430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:38.872393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:40.879172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:40.884348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:42.888523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:42.893943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:44.897326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:44.902001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:46.905734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:46.910287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:48.913330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:48.919094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:50.922852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:50.927472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:52.931112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:52.935264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:54.938285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:54.944139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:56.947802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:56.951695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:58.954942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:58.959120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:00.962878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:00.968772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-180941 -n functional-180941
helpers_test.go:270: (dbg) Run:  kubectl --context functional-180941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-180941 describe pod busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-180941 describe pod busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988: exit status 1 (109.900604ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-180941/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:33:41 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://01e01e1796cdd5d38f245adb949d3c722b1407a97c69d13b52526d758d15d83e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:33:44 +0000
	      Finished:     Fri, 19 Dec 2025 02:33:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v7rm6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v7rm6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15s   default-scheduler  Successfully assigned default/busybox-mount to functional-180941
	  Normal  Pulling    14s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.337s (2.337s including waiting). Image size: 2395207 bytes.
	  Normal  Created    12s   kubelet            spec.containers{mount-munger}: Created container: mount-munger
	  Normal  Started    12s   kubelet            spec.containers{mount-munger}: Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-cmmxr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-180941/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:33:55 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sv7gz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sv7gz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cmmxr to functional-180941
	  Normal  Pulling    1s    kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-180941/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:33:53 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          public.ecr.aws/nginx/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69dzt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-69dzt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/nginx-svc to functional-180941
	  Normal  Pulling    3s    kubelet            spec.containers{nginx}: Pulling image "public.ecr.aws/nginx/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-180941/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:33:56 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          public.ecr.aws/nginx/nginx:alpine
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnlff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-pnlff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  0s    default-scheduler  Successfully assigned default/sp-pod to functional-180941

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-api-c5cfcc8f7-d8jxv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-57b44fc5d4-glcd2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-m8mlk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988" not found

** /stderr **
helpers_test.go:288: kubectl --context functional-180941 describe pod busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (15.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (26.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-453239 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-453239 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-453239 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-453239 --alsologtostderr -v=1] stderr:
I1219 02:36:45.257828  327111 out.go:360] Setting OutFile to fd 1 ...
I1219 02:36:45.258077  327111 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:45.258086  327111 out.go:374] Setting ErrFile to fd 2...
I1219 02:36:45.258090  327111 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:45.258305  327111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:36:45.258534  327111 mustload.go:66] Loading cluster: functional-453239
I1219 02:36:45.258942  327111 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:45.259323  327111 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:45.279847  327111 host.go:66] Checking if "functional-453239" exists ...
I1219 02:36:45.280270  327111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:36:45.344469  327111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:36:45.334782522 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:36:45.344628  327111 api_server.go:166] Checking apiserver status ...
I1219 02:36:45.344677  327111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:36:45.344711  327111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:45.363226  327111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:45.473967  327111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5010/cgroup
W1219 02:36:45.483835  327111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5010/cgroup: Process exited with status 1
stdout:

stderr:
I1219 02:36:45.483892  327111 ssh_runner.go:195] Run: ls
I1219 02:36:45.488272  327111 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1219 02:36:45.492771  327111 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1219 02:36:45.492830  327111 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:36:45.493009  327111 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:45.493027  327111 addons.go:70] Setting dashboard=true in profile "functional-453239"
I1219 02:36:45.493038  327111 addons.go:239] Setting addon dashboard=true in "functional-453239"
I1219 02:36:45.493073  327111 host.go:66] Checking if "functional-453239" exists ...
I1219 02:36:45.493395  327111 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:45.512191  327111 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:36:45.512228  327111 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:36:45.512335  327111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:45.531595  327111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:45.642712  327111 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:36:45.646401  327111 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:36:45.649772  327111 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:36:46.821000  327111 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.171185696s)
I1219 02:36:46.821143  327111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:36:49.997029  327111 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.175833714s)
I1219 02:36:49.997127  327111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:36:50.202678  327111 addons.go:500] Verifying addon dashboard=true in "functional-453239"
I1219 02:36:50.202995  327111 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:50.223317  327111 out.go:179] * Verifying dashboard addon...
I1219 02:36:50.225220  327111 kapi.go:59] client config for functional-453239: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:36:50.225740  327111 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:36:50.225760  327111 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:36:50.225768  327111 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:36:50.225778  327111 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:36:50.225787  327111 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:36:50.226220  327111 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:36:50.235179  327111 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:36:50.235208  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:50.729775  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:51.229515  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:51.730601  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:52.230200  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:52.730296  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:53.230812  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:53.730152  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:54.229957  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:54.729745  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:55.229725  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:55.731045  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:56.230620  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:56.730246  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:57.229908  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:57.730256  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:58.230511  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:58.730121  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:59.230118  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:36:59.729808  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:00.230477  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:00.730736  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:01.229545  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:01.730523  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:02.230877  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:02.729834  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:03.229776  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:03.729717  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:04.296053  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:04.729762  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:05.229760  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:05.729959  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:06.230005  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:06.729898  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:07.229904  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:07.729776  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:08.229611  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:08.730306  327111 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:09.230643  327111 kapi.go:107] duration metric: took 19.004424806s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:37:09.232149  327111 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-453239 addons enable metrics-server

I1219 02:37:09.233252  327111 addons.go:202] Writing out "functional-453239" config to set dashboard=true...
W1219 02:37:09.233496  327111 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:37:09.233943  327111 kapi.go:59] client config for functional-453239: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:09.236530  327111 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy  kubernetes-dashboard  045009b2-f25d-41c6-b032-ecea88fd4e09 833 0 2025-12-19 02:36:49 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:36:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:30386,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.105.123.60,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.123.60],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:37:09.236734  327111 host.go:66] Checking if "functional-453239" exists ...
I1219 02:37:09.236958  327111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-453239
I1219 02:37:09.255355  327111 kapi.go:59] client config for functional-453239: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:09.262552  327111 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:09.265944  327111 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:09.268940  327111 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:09.272139  327111 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:09.458440  327111 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:09.519035  327111 out.go:179] * Dashboard Token:
I1219 02:37:09.520234  327111 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6InNVM0cxUFFKbVo4SjhfZWIwblB6SnJKRnE2ZmNpSjBCWFNOY1RWdEJwVFUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4MjI5LCJpYXQiOjE3NjYxMTE4MjksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYzlhNTgzODgtYTY0YS00NGI4LTgxZjgtOTVjNDUyNWZjMmM0Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNjE2YmMxMTMtYTM1Yy00NjBjLWJhNDAtODMwNjljYTIwYjE3In19LCJuYmYiOjE3NjYxMTE4MjksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Mgfv0uMOyxOExmkzYc-7tFWC4JTFtyM0xR1RJB5Xu2F8ei7tuCkRAViBoX6Mo4rAOvTzzd4KG2Ot19iPbBOONQt7a4MB4UX5RdimWyhX6PTm3TpPX92D7B7M447v3N-hLd2H5D3ydimp4DtGxlXSQb-J759u_-KPRDRJ81i-6xrJrdMB8C0j6ci5sLceCit9aPdKELpnZ1O_DgGlYvoNih12lNx5oqvct5ULL_9TLB9Iu7OGMqXTvdaoATLdW2CRRmyqvEHomSp-JUMmlTvJFCWH2MUvUb0I2OOo62t5iUzLX9V7crNYALVXc_QYFzVpioTMxRAw7lDIHePNER7J2A
I1219 02:37:09.521383  327111 out.go:203] https://192.168.49.2:30386
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-453239
helpers_test.go:244: (dbg) docker inspect functional-453239:

-- stdout --
	[
	    {
	        "Id": "e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47",
	        "Created": "2025-12-19T02:34:44.529398444Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310942,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:34:44.56301878Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47/hosts",
	        "LogPath": "/var/lib/docker/containers/e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47/e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47-json.log",
	        "Name": "/functional-453239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-453239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-453239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1b36093c768400cc6894890adcf639808be5cdae49e3007511791ab4cecbd47",
	                "LowerDir": "/var/lib/docker/overlay2/f309d28962450a657bd1405e09070089e4a7a2052cde1f78396ace994a6dd282-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f309d28962450a657bd1405e09070089e4a7a2052cde1f78396ace994a6dd282/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f309d28962450a657bd1405e09070089e4a7a2052cde1f78396ace994a6dd282/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f309d28962450a657bd1405e09070089e4a7a2052cde1f78396ace994a6dd282/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-453239",
	                "Source": "/var/lib/docker/volumes/functional-453239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-453239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-453239",
	                "name.minikube.sigs.k8s.io": "functional-453239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4298f4970f33b80ce86738749abda788a63e6397463f10b38653a9e7209ee547",
	            "SandboxKey": "/var/run/docker/netns/4298f4970f33",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-453239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2c5279ef1e8d53adc9661b9c8129fc6d9db42e6b9618b9fad089df3688de4d6b",
	                    "EndpointID": "1323d7e893f1ff9ee9597b5e2df1d9ebdc0a3cd1a760104e74d87615c6d1fe43",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "a6:de:eb:a3:85:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-453239",
	                        "e1b36093c768"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
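The `docker inspect` output above publishes each container port (22, 2376, 5000, 8441, 32443) to a localhost host port; 8441/tcp is the profile's apiserver port. A minimal sketch of reading that mapping programmatically, using a trimmed literal copy of the report's `NetworkSettings.Ports` data rather than a live `docker inspect` call:

```python
# Sketch: extract host-port bindings from docker-inspect-style JSON.
# The embedded JSON is a trimmed copy of the report's data above.
import json

inspect_json = """
{
    "NetworkSettings": {
        "Ports": {
            "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32788"}],
            "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32791"}]
        }
    }
}
"""

settings = json.loads(inspect_json)

# Map each container port to its published host port (first binding).
host_ports = {
    port: bindings[0]["HostPort"]
    for port, bindings in settings["NetworkSettings"]["Ports"].items()
    if bindings
}

print(host_ports["8441/tcp"])  # apiserver host port in this report: 32791
```

The same lookup can be done directly with docker's Go templates, e.g. `docker inspect --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-453239`.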
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-453239 -n functional-453239
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-453239 logs -n 25: (1.439762171s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-453239 ssh -- ls -la /mount-9p                                                                                                         │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-453239 ssh sudo umount -f /mount-9p                                                                                                    │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ start          │ -p functional-453239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ start          │ -p functional-453239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1           │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ ssh            │ functional-453239 ssh findmnt -T /mount1                                                                                                          │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ mount          │ -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount3 --alsologtostderr -v=1              │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ mount          │ -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount2 --alsologtostderr -v=1              │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ mount          │ -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount1 --alsologtostderr -v=1              │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ start          │ -p functional-453239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-453239 --alsologtostderr -v=1                                                                                    │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-453239 ssh findmnt -T /mount1                                                                                                          │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-453239 ssh findmnt -T /mount2                                                                                                          │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-453239 ssh findmnt -T /mount3                                                                                                          │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ mount          │ -p functional-453239 --kill=true                                                                                                                  │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ update-context │ functional-453239 update-context --alsologtostderr -v=2                                                                                           │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ update-context │ functional-453239 update-context --alsologtostderr -v=2                                                                                           │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ update-context │ functional-453239 update-context --alsologtostderr -v=2                                                                                           │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image          │ functional-453239 image ls --format short --alsologtostderr                                                                                       │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image          │ functional-453239 image ls --format yaml --alsologtostderr                                                                                        │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ ssh            │ functional-453239 ssh pgrep buildkitd                                                                                                             │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │                     │
	│ image          │ functional-453239 image build -t localhost/my-image:functional-453239 testdata/build --alsologtostderr                                            │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image          │ functional-453239 image ls --format json --alsologtostderr                                                                                        │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image          │ functional-453239 image ls --format table --alsologtostderr                                                                                       │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ service        │ functional-453239 service hello-node-connect --url                                                                                                │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	│ image          │ functional-453239 image ls                                                                                                                        │ functional-453239 │ jenkins │ v1.37.0 │ 19 Dec 25 02:36 UTC │ 19 Dec 25 02:36 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:36:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:36:45.045876  326739 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:36:45.046038  326739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:36:45.046045  326739 out.go:374] Setting ErrFile to fd 2...
	I1219 02:36:45.046052  326739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:36:45.046567  326739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:36:45.050712  326739 out.go:368] Setting JSON to false
	I1219 02:36:45.052268  326739 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4744,"bootTime":1766107061,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:36:45.052408  326739 start.go:143] virtualization: kvm guest
	I1219 02:36:45.055265  326739 out.go:179] * [functional-453239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:36:45.056488  326739 notify.go:221] Checking for updates...
	I1219 02:36:45.056508  326739 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:36:45.057792  326739 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:36:45.060993  326739 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:36:45.062484  326739 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:36:45.067870  326739 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:36:45.069174  326739 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:36:45.070960  326739 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 02:36:45.072730  326739 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:36:45.105543  326739 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:36:45.105663  326739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:36:45.182027  326739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:36:45.169952659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:36:45.182207  326739 docker.go:319] overlay module found
	I1219 02:36:45.184459  326739 out.go:179] * Using the docker driver based on the existing profile
	I1219 02:36:45.185993  326739 start.go:309] selected driver: docker
	I1219 02:36:45.186016  326739 start.go:928] validating driver "docker" against &{Name:functional-453239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-453239 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:36:45.186160  326739 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:36:45.188108  326739 out.go:203] 
	W1219 02:36:45.189052  326739 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1219 02:36:45.189961  326739 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	104b3d9e0393c       59f642f485d26       2 seconds ago        Running             kubernetes-dashboard-web               0                   595d81ff524bc       kubernetes-dashboard-web-7f7574785f-86h99               kubernetes-dashboard
	6db3c0367aeed       3a975970da2f5       5 seconds ago        Running             proxy                                  0                   e41df3e436e30       kubernetes-dashboard-kong-78b7499b45-n429j              kubernetes-dashboard
	58dcb40c4d219       3a975970da2f5       6 seconds ago        Exited              clear-stale-pid                        0                   e41df3e436e30       kubernetes-dashboard-kong-78b7499b45-n429j              kubernetes-dashboard
	74f98b56a5d4d       a0607af4fcd8a       12 seconds ago       Running             kubernetes-dashboard-api               0                   7f74f1bca15e8       kubernetes-dashboard-api-77fcfc9fc6-nmbkz               kubernetes-dashboard
	242a586260b4f       d9cbc9f4053ca       15 seconds ago       Running             kubernetes-dashboard-metrics-scraper   0                   d64ae580db0ed       kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2   kubernetes-dashboard
	87d8168670a1c       dd54374d0ab14       17 seconds ago       Running             kubernetes-dashboard-auth              0                   7a458521669d6       kubernetes-dashboard-auth-fdf56577f-r7cx9               kubernetes-dashboard
	b2805adf90c2a       9056ab77afb8e       21 seconds ago       Running             echo-server                            0                   0c1563bdcabdc       hello-node-connect-9f67c86d4-gz2l2                      default
	c1b12e4acccbe       04da2b0513cd7       22 seconds ago       Running             myfrontend                             0                   43168e1670692       sp-pod                                                  default
	49e6ce076c2dd       56cc512116c8f       31 seconds ago       Exited              mount-munger                           0                   a4895364ed6b5       busybox-mount                                           default
	7d0919301cf13       04da2b0513cd7       34 seconds ago       Running             nginx                                  0                   e6f3fcfcdb3cb       nginx-svc                                               default
	c26bd11ec8688       20d0be4ee4524       37 seconds ago       Running             mysql                                  0                   b75705b6e96e1       mysql-7d7b65bc95-5zm5q                                  default
	d57ba3e022d43       9056ab77afb8e       48 seconds ago       Running             echo-server                            0                   02ecc522bbcf9       hello-node-5758569b79-vfvcs                             default
	31dba213bbcdc       58865405a13bc       About a minute ago   Running             kube-apiserver                         0                   48da9e3357c64       kube-apiserver-functional-453239                        kube-system
	72b1c3cf1566c       0a108f7189562       About a minute ago   Running             etcd                                   1                   ded4292d24ae4       etcd-functional-453239                                  kube-system
	da17d6eff0bcf       5032a56602e1b       About a minute ago   Running             kube-controller-manager                1                   11383fe231ee0       kube-controller-manager-functional-453239               kube-system
	efd1f1bff139f       73f80cdc073da       About a minute ago   Running             kube-scheduler                         1                   e4b08ce9ea722       kube-scheduler-functional-453239                        kube-system
	7da63cf537287       6e38f40d628db       About a minute ago   Running             storage-provisioner                    1                   d2fda152ce5b0       storage-provisioner                                     kube-system
	568e4e5b5e53b       4921d7a6dffa9       About a minute ago   Running             kindnet-cni                            1                   3d79a71e65a7a       kindnet-xtnn4                                           kube-system
	5e8fa8b7d21c1       aa5e3ebc0dfed       About a minute ago   Running             coredns                                1                   e2d530f4a38c7       coredns-7d764666f9-42n5t                                kube-system
	b1154418e7182       af0321f3a4f38       About a minute ago   Running             kube-proxy                             1                   ae5695a4c497d       kube-proxy-9hqwh                                        kube-system
	c1f971e8b7db5       aa5e3ebc0dfed       About a minute ago   Exited              coredns                                0                   e2d530f4a38c7       coredns-7d764666f9-42n5t                                kube-system
	b82f571614b3a       6e38f40d628db       About a minute ago   Exited              storage-provisioner                    0                   d2fda152ce5b0       storage-provisioner                                     kube-system
	54492795713cb       4921d7a6dffa9       2 minutes ago        Exited              kindnet-cni                            0                   3d79a71e65a7a       kindnet-xtnn4                                           kube-system
	fcf469adf3cfb       af0321f3a4f38       2 minutes ago        Exited              kube-proxy                             0                   ae5695a4c497d       kube-proxy-9hqwh                                        kube-system
	5d54f6a265dfa       73f80cdc073da       2 minutes ago        Exited              kube-scheduler                         0                   e4b08ce9ea722       kube-scheduler-functional-453239                        kube-system
	4d6298517293f       5032a56602e1b       2 minutes ago        Exited              kube-controller-manager                0                   11383fe231ee0       kube-controller-manager-functional-453239               kube-system
	3f54046669b15       0a108f7189562       2 minutes ago        Exited              etcd                                   0                   ded4292d24ae4       etcd-functional-453239                                  kube-system
	
	
	==> containerd <==
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.097073976Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda73a080ba86f773aa786c705f16ad2d2.slice/cri-containerd-efd1f1bff139fd7951f5a299bb601074023d4166c3dabcbbf8d9174948039abe.scope/hugetlb.1GB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.097948744Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode857a9b62a9c83e8c507d64541e418ee.slice/cri-containerd-da17d6eff0bcf1f6fe0ba4bf9b895933d2446efcf2bebf984cd29525b739de9c.scope/hugetlb.2MB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.098039631Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode857a9b62a9c83e8c507d64541e418ee.slice/cri-containerd-da17d6eff0bcf1f6fe0ba4bf9b895933d2446efcf2bebf984cd29525b739de9c.scope/hugetlb.1GB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.098872591Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d90ad8_fcb7_4bc8_b0d0_b07bd153ea62.slice/cri-containerd-87d8168670a1cde5cc36711a9ae41d861b4f94aef7f6406cd94cc473c6e3affc.scope/hugetlb.2MB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.099003431Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d90ad8_fcb7_4bc8_b0d0_b07bd153ea62.slice/cri-containerd-87d8168670a1cde5cc36711a9ae41d861b4f94aef7f6406cd94cc473c6e3affc.scope/hugetlb.1GB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.099773794Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78ac49b2_aac0_47fe_9d11_8196ade9d6fd.slice/cri-containerd-5e8fa8b7d21c13a55afc4100b820ab231d9b09f2eaeb8166a9ec45d2a5fa31f5.scope/hugetlb.2MB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.099876991Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78ac49b2_aac0_47fe_9d11_8196ade9d6fd.slice/cri-containerd-5e8fa8b7d21c13a55afc4100b820ab231d9b09f2eaeb8166a9ec45d2a5fa31f5.scope/hugetlb.1GB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.100666552Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ea556cd_cb1c_4117_a3b3_9e2dfdc9612c.slice/cri-containerd-242a586260b4f835d77b053ad8f8b109fd388692c1f67798c5ed0bf96dcca6a0.scope/hugetlb.2MB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.100754871Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ea556cd_cb1c_4117_a3b3_9e2dfdc9612c.slice/cri-containerd-242a586260b4f835d77b053ad8f8b109fd388692c1f67798c5ed0bf96dcca6a0.scope/hugetlb.1GB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.101469873Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd85af8e2_426b_4304_b59a_4396fbe9bcfd.slice/cri-containerd-c1b12e4acccbe8a9d27ef18fa7e68ca7d6adea179fa17b25e4947b43edc92696.scope/hugetlb.2MB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.101610695Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd85af8e2_426b_4304_b59a_4396fbe9bcfd.slice/cri-containerd-c1b12e4acccbe8a9d27ef18fa7e68ca7d6adea179fa17b25e4947b43edc92696.scope/hugetlb.1GB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.102505891Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38abfc15_b37b_4404_b547_4813fcb6679b.slice/cri-containerd-74f98b56a5d4dbe6084aadb4d25432cd0f9f23e98743d3fc0ec0d1f9507c4718.scope/hugetlb.2MB.events\""
	Dec 19 02:37:07 functional-453239 containerd[3781]: time="2025-12-19T02:37:07.102723494Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38abfc15_b37b_4404_b547_4813fcb6679b.slice/cri-containerd-74f98b56a5d4dbe6084aadb4d25432cd0f9f23e98743d3fc0ec0d1f9507c4718.scope/hugetlb.1GB.events\""
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.101826905Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web:1.7.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.102507574Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard-web:1.7.0: active requests=0, bytes read=62507990"
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.103720015Z" level=info msg="ImageCreate event name:\"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.106397521Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.107452396Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard-web:1.7.0\" with image id \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\", repo tag \"docker.io/kubernetesui/dashboard-web:1.7.0\", repo digest \"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\", size \"62497108\" in 4.079427454s"
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.107488268Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-web:1.7.0\" returns image reference \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\""
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.112949527Z" level=info msg="CreateContainer within sandbox \"595d81ff524bc7b5d70b623528ea102281180bf2de5f5ca888e96a936246b8be\" for container name:\"kubernetes-dashboard-web\""
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.118445935Z" level=info msg="Container 104b3d9e0393ceb8225dee7755c8bcc888cf06d98ca1b1888071daa3fd65a516: CDI devices from CRI Config.CDIDevices: []"
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.125987103Z" level=info msg="CreateContainer within sandbox \"595d81ff524bc7b5d70b623528ea102281180bf2de5f5ca888e96a936246b8be\" for name:\"kubernetes-dashboard-web\" returns container id \"104b3d9e0393ceb8225dee7755c8bcc888cf06d98ca1b1888071daa3fd65a516\""
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.126673898Z" level=info msg="StartContainer for \"104b3d9e0393ceb8225dee7755c8bcc888cf06d98ca1b1888071daa3fd65a516\""
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.127764943Z" level=info msg="connecting to shim 104b3d9e0393ceb8225dee7755c8bcc888cf06d98ca1b1888071daa3fd65a516" address="unix:///run/containerd/s/3d405abf0bcb590e8dc7a9c58ba5a2e337f3168e028155d1619ce61dbd799926" protocol=ttrpc version=3
	Dec 19 02:37:08 functional-453239 containerd[3781]: time="2025-12-19T02:37:08.259861281Z" level=info msg="StartContainer for \"104b3d9e0393ceb8225dee7755c8bcc888cf06d98ca1b1888071daa3fd65a516\" returns successfully"
	
	
	==> coredns [5e8fa8b7d21c13a55afc4100b820ab231d9b09f2eaeb8166a9ec45d2a5fa31f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56624 - 17577 "HINFO IN 3774970402640480661.2308082880724294518. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031962796s
	
	
	==> coredns [c1f971e8b7db50737146a0960ebe665a809962a49b8006ef5d64ebc9c2a135f3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36039 - 42302 "HINFO IN 620711762965291012.5651434243801939214. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.046131559s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-453239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-453239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-453239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_34_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:34:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-453239
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:37:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:37:00 +0000   Fri, 19 Dec 2025 02:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:37:00 +0000   Fri, 19 Dec 2025 02:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:37:00 +0000   Fri, 19 Dec 2025 02:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:37:00 +0000   Fri, 19 Dec 2025 02:35:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-453239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                6c8b357a-b82d-4733-9604-af7f4c6e8744
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-vfvcs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     hello-node-connect-9f67c86d4-gz2l2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     mysql-7d7b65bc95-5zm5q                                   600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     49s
	  default                     nginx-svc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-7d764666f9-42n5t                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m10s
	  kube-system                 etcd-functional-453239                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m16s
	  kube-system                 kindnet-xtnn4                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-functional-453239                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-functional-453239                200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-9hqwh                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-functional-453239                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kubernetes-dashboard        kubernetes-dashboard-api-77fcfc9fc6-nmbkz                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     21s
	  kubernetes-dashboard        kubernetes-dashboard-auth-fdf56577f-r7cx9                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     21s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-n429j               0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     21s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-86h99                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1850m (23%)  1800m (22%)
	  memory             1532Mi (4%)  2520Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m12s  node-controller  Node functional-453239 event: Registered Node functional-453239 in Controller
	  Normal  RegisteredNode  82s    node-controller  Node functional-453239 event: Registered Node functional-453239 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3f54046669b157a19328e05ce5672cded977db9f4876211b2f7259a6308af40c] <==
	{"level":"info","ts":"2025-12-19T02:34:51.123034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:34:51.123036Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-19T02:34:51.123202Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-19T02:34:51.124046Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:34:51.124081Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:34:51.127463Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-19T02:34:51.127464Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:35:38.793716Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:35:38.793796Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-453239","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-19T02:35:38.793956Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:35:38.794059Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:35:45.795948Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:45.796031Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-12-19T02:35:45.796026Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-19T02:35:45.796078Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:35:45.796059Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:35:45.796091Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:35:45.796114Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:35:45.796125Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-19T02:35:45.796129Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:45.796098Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-19T02:35:45.798969Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-19T02:35:45.799042Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:35:45.799100Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-19T02:35:45.799134Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-453239","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [72b1c3cf1566cbe356c133f79a92ed409ebc44b70582592072b5bb8a503e4356] <==
	{"level":"info","ts":"2025-12-19T02:35:46.020300Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T02:35:46.020364Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T02:35:46.613032Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T02:35:46.613101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T02:35:46.613181Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-19T02:35:46.613200Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T02:35:46.613225Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T02:35:46.615317Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-19T02:35:46.615402Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T02:35:46.615434Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-12-19T02:35:46.615448Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-19T02:35:46.616567Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-453239 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T02:35:46.616571Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:35:46.616604Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:35:46.616813Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:35:46.616860Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:35:46.619127Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:35:46.619283Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:35:46.621268Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:35:46.621270Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-19T02:36:25.581656Z","caller":"traceutil/trace.go:172","msg":"trace[872214761] linearizableReadLoop","detail":"{readStateIndex:726; appliedIndex:726; }","duration":"120.755555ms","start":"2025-12-19T02:36:25.460874Z","end":"2025-12-19T02:36:25.581630Z","steps":["trace[872214761] 'read index received'  (duration: 120.745168ms)","trace[872214761] 'applied index is now lower than readState.Index'  (duration: 9.165µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:36:25.581806Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.88682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:36:25.581899Z","caller":"traceutil/trace.go:172","msg":"trace[1576369158] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:675; }","duration":"121.023989ms","start":"2025-12-19T02:36:25.460863Z","end":"2025-12-19T02:36:25.581887Z","steps":["trace[1576369158] 'agreement among raft nodes before linearized reading'  (duration: 120.841643ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:36:25.581847Z","caller":"traceutil/trace.go:172","msg":"trace[2033907740] transaction","detail":"{read_only:false; response_revision:676; number_of_response:1; }","duration":"144.47299ms","start":"2025-12-19T02:36:25.437352Z","end":"2025-12-19T02:36:25.581825Z","steps":["trace[2033907740] 'process raft request'  (duration: 144.334387ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:36:57.868753Z","caller":"traceutil/trace.go:172","msg":"trace[95994795] transaction","detail":"{read_only:false; response_revision:951; number_of_response:1; }","duration":"139.828367ms","start":"2025-12-19T02:36:57.728902Z","end":"2025-12-19T02:36:57.868730Z","steps":["trace[95994795] 'process raft request'  (duration: 139.665406ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:37:10 up  1:19,  0 user,  load average: 1.96, 7.93, 45.13
	Linux functional-453239 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54492795713cb46514ef77bfc6b0bd60c6d6a5f42189ab5340dd776dd95ad087] <==
	I1219 02:35:03.905076       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 02:35:03.905367       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1219 02:35:03.905512       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:35:03.905539       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:35:03.905569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:35:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:35:04.127516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:35:04.127607       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:35:04.127629       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:35:04.201462       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:35:04.427923       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:35:04.427950       1 metrics.go:72] Registering metrics
	I1219 02:35:04.427995       1 controller.go:711] "Syncing nftables rules"
	I1219 02:35:14.129262       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:14.129364       1 main.go:301] handling current node
	I1219 02:35:24.134767       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:24.134821       1 main.go:301] handling current node
	I1219 02:35:34.131988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:34.132037       1 main.go:301] handling current node
	
	
	==> kindnet [568e4e5b5e53b7b35b0dac9df5f400789c1c195dd137dae86ca1f03a283975ed] <==
	I1219 02:35:39.504137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:35:39.504183       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:35:39.504201       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:35:39.504367       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:35:39.804447       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:35:39.804635       1 metrics.go:72] Registering metrics
	I1219 02:35:39.804721       1 controller.go:711] "Syncing nftables rules"
	I1219 02:35:49.505065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:49.505126       1 main.go:301] handling current node
	I1219 02:35:59.504313       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:59.504367       1 main.go:301] handling current node
	I1219 02:36:09.505076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:36:09.505120       1 main.go:301] handling current node
	I1219 02:36:19.504977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:36:19.505072       1 main.go:301] handling current node
	I1219 02:36:29.505680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:36:29.505738       1 main.go:301] handling current node
	I1219 02:36:39.504874       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:36:39.504916       1 main.go:301] handling current node
	I1219 02:36:49.504231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:36:49.504282       1 main.go:301] handling current node
	I1219 02:36:59.505069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:36:59.505126       1 main.go:301] handling current node
	I1219 02:37:09.504111       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:37:09.504159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31dba213bbcdc173e7b0867efb2986b198bc3766d0ec6a7f110d6e92e25fc51a] <==
	I1219 02:36:47.494632       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:36:47.503382       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:36:47.511456       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:36:47.519510       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:36:47.535462       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	W1219 02:36:49.293356       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.302870       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.319170       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.328967       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.340358       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.351897       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.370125       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.383005       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.392551       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.492442       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.502137       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:36:49.513845       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 02:36:49.884101       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:36:49.943745       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.105.123.60"}
	I1219 02:36:49.944223       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.101.140.170"}
	I1219 02:36:49.950063       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.183.174"}
	I1219 02:36:49.957694       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.103.228.32"}
	I1219 02:36:49.960610       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.101.241.131"}
	E1219 02:36:52.904046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34592: use of closed network connection
	E1219 02:36:55.670730       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44318: use of closed network connection
	
	
	==> kube-controller-manager [4d6298517293f183e15010fa97e109926a26f05ed5d543bae7079a3dbb413580] <==
	I1219 02:34:58.948891       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.948898       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.948920       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.949158       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.949171       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.949185       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.949223       1 range_allocator.go:177] "Sending events to api server"
	I1219 02:34:58.949304       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 02:34:58.949315       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:58.949321       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.949666       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.950390       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.950396       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.951360       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.951509       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1219 02:34:58.951625       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-453239"
	I1219 02:34:58.951681       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 02:34:58.953605       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:58.954496       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:58.959013       1 range_allocator.go:433] "Set node PodCIDR" node="functional-453239" podCIDRs=["10.244.0.0/24"]
	I1219 02:34:59.049511       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:59.049536       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:34:59.049541       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:34:59.054950       1 shared_informer.go:377] "Caches are synced"
	I1219 02:35:18.956134       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [da17d6eff0bcf1f6fe0ba4bf9b895933d2446efcf2bebf984cd29525b739de9c] <==
	E1219 02:35:58.997888       1 reflector.go:204] "Failed to watch" err="apiservices.apiregistration.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"apiservices\" in API group \"apiregistration.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1219 02:35:58.997954       1 reflector.go:204] "Failed to watch" err="configmaps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"configmaps\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ConfigMap"
	E1219 02:35:58.998003       1 reflector.go:204] "Failed to watch" err="podtemplates is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"podtemplates\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodTemplate"
	E1219 02:35:58.998051       1 reflector.go:204] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 02:35:58.998094       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 02:35:58.998136       1 reflector.go:204] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 02:35:58.998181       1 reflector.go:204] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:35:58.998226       1 reflector.go:204] "Failed to watch" err="flowschemas.flowcontrol.apiserver.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"flowschemas\" in API group \"flowcontrol.apiserver.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.FlowSchema"
	E1219 02:35:58.998277       1 reflector.go:204] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 02:35:58.998370       1 reflector.go:204] "Failed to watch" err="certificatesigningrequests.certificates.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"certificatesigningrequests\" in API group \"certificates.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CertificateSigningRequest"
	E1219 02:35:58.998432       1 reflector.go:204] "Failed to watch" err="ingresses.networking.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"ingresses\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Ingress"
	E1219 02:35:58.999677       1 reflector.go:204] "Failed to watch" err="controllerrevisions.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"controllerrevisions\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ControllerRevision"
	I1219 02:36:49.282088       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 02:36:49.282150       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:36:49.282191       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 02:36:49.282209       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 02:36:49.282224       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:36:49.282251       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:36:49.282293       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:36:49.282342       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 02:36:49.282365       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 02:36:49.282464       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:36:49.482453       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:36:50.483543       1 shared_informer.go:377] "Caches are synced"
	I1219 02:36:50.583107       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b1154418e7182b289ac98571f29d5a050ed8b3806c9994ab39cfb4c59e75b742] <==
	I1219 02:35:39.043910       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:35:39.109469       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:35:48.209678       1 shared_informer.go:377] "Caches are synced"
	I1219 02:35:48.209739       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:35:48.209934       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:35:48.234121       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:35:48.234196       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:35:48.240400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:35:48.240887       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:35:48.240929       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:48.242477       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:35:48.242507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:35:48.242492       1 config.go:200] "Starting service config controller"
	I1219 02:35:48.242531       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:35:48.242542       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:35:48.242550       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:35:48.242561       1 config.go:309] "Starting node config controller"
	I1219 02:35:48.242567       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:35:48.242575       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:35:48.343111       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:35:48.443614       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:35:48.742935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E1219 02:35:59.030019       1 reflector.go:204] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	
	
	==> kube-proxy [fcf469adf3cfb1ecbd97106fb97b11251211773fa5d11b364ec62d1840b4e878] <==
	I1219 02:35:00.708305       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:35:00.774487       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:35:00.875362       1 shared_informer.go:377] "Caches are synced"
	I1219 02:35:00.875478       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:35:00.875638       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:35:00.898992       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:35:00.899075       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:35:00.904973       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:35:00.905418       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:35:00.905441       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:00.906703       1 config.go:200] "Starting service config controller"
	I1219 02:35:00.906725       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:35:00.906743       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:35:00.906753       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:35:00.906766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:35:00.906773       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:35:00.906854       1 config.go:309] "Starting node config controller"
	I1219 02:35:00.906872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:35:01.007421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:35:01.007469       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:35:01.007474       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:35:01.007495       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5d54f6a265dfa62527d982a41367f722d1c0597f839ea2606d1878dc8f805eba] <==
	E1219 02:34:52.175381       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 02:34:52.175487       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 02:34:52.175489       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 02:34:52.175558       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 02:34:52.175827       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 02:34:52.176163       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 02:34:52.176221       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 02:34:52.176434       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 02:34:52.984501       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 02:34:53.020050       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 02:34:53.157655       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 02:34:53.292964       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 02:34:53.310001       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 02:34:53.318168       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:34:53.361738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1219 02:34:53.363484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 02:34:53.396236       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 02:34:53.416462       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1219 02:34:55.168239       1 shared_informer.go:377] "Caches are synced"
	I1219 02:35:38.666687       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:35:38.666691       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:38.666854       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:35:38.666876       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:35:38.666919       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:35:38.666945       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [efd1f1bff139fd7951f5a299bb601074023d4166c3dabcbbf8d9174948039abe] <==
	I1219 02:35:39.346239       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:35:47.672548       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:35:47.672616       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:47.677391       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 02:35:47.677413       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:35:47.677436       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:35:47.677434       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:35:47.677433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:35:47.677477       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:35:47.677441       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:35:47.677614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:35:47.878476       1 shared_informer.go:377] "Caches are synced"
	I1219 02:35:47.878541       1 shared_informer.go:377] "Caches are synced"
	I1219 02:35:47.878502       1 shared_informer.go:377] "Caches are synced"
	E1219 02:35:58.983956       1 reflector.go:204] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 02:35:58.983975       1 reflector.go:204] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 02:35:58.984046       1 reflector.go:204] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 02:35:59.017679       1 reflector.go:204] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:35:59.017625       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	
	
	==> kubelet <==
	Dec 19 02:36:50 functional-453239 kubelet[4896]: I1219 02:36:50.142655    4896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/081840db-ec2c-4925-a270-eac4e3c55961-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-78b7499b45-n429j\" (UID: \"081840db-ec2c-4925-a270-eac4e3c55961\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j"
	Dec 19 02:36:50 functional-453239 kubelet[4896]: I1219 02:36:50.142697    4896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/081840db-ec2c-4925-a270-eac4e3c55961-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-78b7499b45-n429j\" (UID: \"081840db-ec2c-4925-a270-eac4e3c55961\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j"
	Dec 19 02:36:50 functional-453239 kubelet[4896]: I1219 02:36:50.142736    4896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0ea556cd-cb1c-4117-a3b3-9e2dfdc9612c-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2\" (UID: \"0ea556cd-cb1c-4117-a3b3-9e2dfdc9612c\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2"
	Dec 19 02:36:50 functional-453239 kubelet[4896]: I1219 02:36:50.142797    4896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1d90ad8-fcb7-4bc8-b0d0-b07bd153ea62-tmp-volume\") pod \"kubernetes-dashboard-auth-fdf56577f-r7cx9\" (UID: \"a1d90ad8-fcb7-4bc8-b0d0-b07bd153ea62\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-fdf56577f-r7cx9"
	Dec 19 02:36:50 functional-453239 kubelet[4896]: I1219 02:36:50.142858    4896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wtj\" (UniqueName: \"kubernetes.io/projected/38abfc15-b37b-4404-b547-4813fcb6679b-kube-api-access-j6wtj\") pod \"kubernetes-dashboard-api-77fcfc9fc6-nmbkz\" (UID: \"38abfc15-b37b-4404-b547-4813fcb6679b\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-77fcfc9fc6-nmbkz"
	Dec 19 02:36:50 functional-453239 kubelet[4896]: I1219 02:36:50.142887    4896 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/081840db-ec2c-4925-a270-eac4e3c55961-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-78b7499b45-n429j\" (UID: \"081840db-ec2c-4925-a270-eac4e3c55961\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j"
	Dec 19 02:36:52 functional-453239 kubelet[4896]: I1219 02:36:52.910537    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:36:52 functional-453239 kubelet[4896]: I1219 02:36:52.910659    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:36:53 functional-453239 kubelet[4896]: I1219 02:36:53.130153    4896 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/hello-node-connect-9f67c86d4-gz2l2" podStartSLOduration=5.284387744 podStartE2EDuration="7.130129609s" podCreationTimestamp="2025-12-19 02:36:46 +0000 UTC" firstStartedPulling="2025-12-19 02:36:47.357857394 +0000 UTC m=+50.589747270" lastFinishedPulling="2025-12-19 02:36:49.203599326 +0000 UTC m=+52.435489135" observedRunningTime="2025-12-19 02:36:50.142252808 +0000 UTC m=+53.374142635" watchObservedRunningTime="2025-12-19 02:36:53.130129609 +0000 UTC m=+56.362019446"
	Dec 19 02:36:53 functional-453239 kubelet[4896]: I1219 02:36:53.130716    4896 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-fdf56577f-r7cx9" podStartSLOduration=1.679053151 podStartE2EDuration="4.13070192s" podCreationTimestamp="2025-12-19 02:36:49 +0000 UTC" firstStartedPulling="2025-12-19 02:36:50.458263841 +0000 UTC m=+53.690153660" lastFinishedPulling="2025-12-19 02:36:52.909912608 +0000 UTC m=+56.141802429" observedRunningTime="2025-12-19 02:36:53.130062171 +0000 UTC m=+56.361951999" watchObservedRunningTime="2025-12-19 02:36:53.13070192 +0000 UTC m=+56.362591747"
	Dec 19 02:36:55 functional-453239 kubelet[4896]: I1219 02:36:55.321839    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:36:55 functional-453239 kubelet[4896]: I1219 02:36:55.321938    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:36:56 functional-453239 kubelet[4896]: E1219 02:36:56.131281    4896 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:36:57 functional-453239 kubelet[4896]: E1219 02:36:57.133635    4896 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:36:58 functional-453239 kubelet[4896]: I1219 02:36:58.039396    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:36:58 functional-453239 kubelet[4896]: I1219 02:36:58.039466    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:36:59 functional-453239 kubelet[4896]: I1219 02:36:59.156621    4896 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-6hdd2" podStartSLOduration=5.304881063 podStartE2EDuration="10.156600276s" podCreationTimestamp="2025-12-19 02:36:49 +0000 UTC" firstStartedPulling="2025-12-19 02:36:50.469476162 +0000 UTC m=+53.701365981" lastFinishedPulling="2025-12-19 02:36:55.321195373 +0000 UTC m=+58.553085194" observedRunningTime="2025-12-19 02:36:56.149635825 +0000 UTC m=+59.381525662" watchObservedRunningTime="2025-12-19 02:36:59.156600276 +0000 UTC m=+62.388490095"
	Dec 19 02:36:59 functional-453239 kubelet[4896]: I1219 02:36:59.157140    4896 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-77fcfc9fc6-nmbkz" podStartSLOduration=2.629985897 podStartE2EDuration="10.157126044s" podCreationTimestamp="2025-12-19 02:36:49 +0000 UTC" firstStartedPulling="2025-12-19 02:36:50.511622298 +0000 UTC m=+53.743512125" lastFinishedPulling="2025-12-19 02:36:58.038762438 +0000 UTC m=+61.270652272" observedRunningTime="2025-12-19 02:36:59.156368924 +0000 UTC m=+62.388258764" watchObservedRunningTime="2025-12-19 02:36:59.157126044 +0000 UTC m=+62.389015872"
	Dec 19 02:37:04 functional-453239 kubelet[4896]: E1219 02:37:04.163655    4896 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j" containerName="proxy"
	Dec 19 02:37:05 functional-453239 kubelet[4896]: E1219 02:37:05.168568    4896 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j" containerName="proxy"
	Dec 19 02:37:06 functional-453239 kubelet[4896]: E1219 02:37:06.174974    4896 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j" containerName="proxy"
	Dec 19 02:37:06 functional-453239 kubelet[4896]: I1219 02:37:06.190836    4896 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j" podStartSLOduration=2.535735993 podStartE2EDuration="17.190821075s" podCreationTimestamp="2025-12-19 02:36:49 +0000 UTC" firstStartedPulling="2025-12-19 02:36:50.51412578 +0000 UTC m=+53.746015598" lastFinishedPulling="2025-12-19 02:37:05.169210862 +0000 UTC m=+68.401100680" observedRunningTime="2025-12-19 02:37:06.190046123 +0000 UTC m=+69.421935964" watchObservedRunningTime="2025-12-19 02:37:06.190821075 +0000 UTC m=+69.422710926"
	Dec 19 02:37:07 functional-453239 kubelet[4896]: E1219 02:37:07.178446    4896 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-n429j" containerName="proxy"
	Dec 19 02:37:08 functional-453239 kubelet[4896]: I1219 02:37:08.109320    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	Dec 19 02:37:08 functional-453239 kubelet[4896]: I1219 02:37:08.109399    4896 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
	
	
	==> kubernetes-dashboard [104b3d9e0393ceb8225dee7755c8bcc888cf06d98ca1b1888071daa3fd65a516] <==
	I1219 02:37:08.327650       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 02:37:08.327708       1 init.go:48] Using in-cluster config
	I1219 02:37:08.327897       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [242a586260b4f835d77b053ad8f8b109fd388692c1f67798c5ed0bf96dcca6a0] <==
	I1219 02:36:55.458284       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 02:36:55.458366       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 02:36:55.458478       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 02:36:55.458487       1 main.go:52] Namespace(s): []
	10.244.0.1 - - [19/Dec/2025:02:36:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	
	
	==> kubernetes-dashboard [74f98b56a5d4dbe6084aadb4d25432cd0f9f23e98743d3fc0ec0d1f9507c4718] <==
	I1219 02:36:58.293959       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 02:36:58.294037       1 init.go:49] Using in-cluster config
	I1219 02:36:58.294257       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 02:36:58.294274       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 02:36:58.294278       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 02:36:58.294281       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 02:36:58.299797       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 02:36:58.299828       1 client.go:265] Creating in-cluster Sidecar client
	I1219 02:36:58.303253       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 02:36:58.308125       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [87d8168670a1cde5cc36711a9ae41d861b4f94aef7f6406cd94cc473c6e3affc] <==
	I1219 02:36:53.120819       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 02:36:53.120921       1 init.go:49] Using in-cluster config
	I1219 02:36:53.121159       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [7da63cf537287a70657da5a81fd912283388810afcef53cb60937341918e73fb] <==
	W1219 02:36:45.680999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:47.684294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:47.689048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:49.692738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:49.698084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:51.701183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:51.705205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:53.708076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:53.712210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:55.715550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:55.722390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:57.725926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:57.870114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:59.873658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:36:59.879054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:01.883357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:01.888014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:03.891063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:03.895756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:05.899113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:05.903503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:07.907881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:07.912445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:09.915941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:09.922200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b82f571614b3a30ce86ecbd7e89b620bc0437b28be129ba42eca3453a66d3156] <==
	W1219 02:35:14.747540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:14.750707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 02:35:14.846276       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-453239_9097b8fc-b806-4cf6-8991-eadce1c0f79d!
	W1219 02:35:16.755191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:16.763809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:18.768804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:18.773334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:20.776851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:20.781250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:22.784254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:22.789716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:24.793482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:24.798138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:26.801311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:26.805345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:28.809327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:28.813980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:30.817333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:30.823313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:32.826892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:32.831129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:34.835149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:34.839655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:36.842979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:36.848265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-453239 -n functional-453239
helpers_test.go:270: (dbg) Run:  kubectl --context functional-453239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-453239 describe pod busybox-mount
helpers_test.go:291: (dbg) kubectl --context functional-453239 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-453239/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:36:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://49e6ce076c2dd86c7493b8f4eae9cf61f1eb6a10fda56426d9dcaf2359bb4cd6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:36:39 +0000
	      Finished:     Fri, 19 Dec 2025 02:36:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xjdb8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xjdb8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  40s   default-scheduler  Successfully assigned default/busybox-mount to functional-453239
	  Normal  Pulling    39s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     32s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.163s (6.825s including waiting). Image size: 2395207 bytes.
	  Normal  Created    32s   kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    32s   kubelet            spec.containers{mount-munger}: Container started

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (26.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:06:15.862769  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:19.191656  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:12.808775  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:39.193433  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:14:23.167361721 +0000 UTC m=+2940.160485923
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-002036
helpers_test.go:244: (dbg) docker inspect old-k8s-version-002036:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7",
	        "Created": "2025-12-19T03:03:20.787101116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 566921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:34.927054149Z",
	            "FinishedAt": "2025-12-19T03:04:34.001324871Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/hosts",
	        "LogPath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7-json.log",
	        "Name": "/old-k8s-version-002036",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-002036:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-002036",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7",
	                "LowerDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-002036",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-002036/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-002036",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-002036",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-002036",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b1b417730d31ff86348063da70554761802e2e18a90604463935ed127c8d369f",
	            "SandboxKey": "/var/run/docker/netns/b1b417730d31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-002036": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0625f500f89a10f1e85ab1719542921f7a0ad6e299e9584edf6d3813be5348f",
	                    "EndpointID": "b90e4a6ec14c36b274745380d6bffb986a2ad735ad97876cecca6fc84bbde272",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:c9:8b:db:76:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-002036",
	                        "9a09c191febd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-002036 -n old-k8s-version-002036
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-002036 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-002036 logs -n 25: (1.743837738s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-967008                                                                                                                                                                                                                              │ cert-options-967008          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p kubernetes-upgrade-340572                                                                                                                                                                                                                        │ kubernetes-upgrade-340572    │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-821572 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-821572                                                                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ delete  │ -p disable-driver-mounts-443690                                                                                                                                                                                                                     │ disable-driver-mounts-443690 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-002036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p old-k8s-version-002036 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p embed-certs-536489 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:50.472071  573699 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:50.472443  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.472454  573699 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:50.472463  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.473301  573699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:04:50.474126  573699 out.go:368] Setting JSON to false
	I1219 03:04:50.476304  573699 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6429,"bootTime":1766107061,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:50.476440  573699 start.go:143] virtualization: kvm guest
	I1219 03:04:50.478144  573699 out.go:179] * [default-k8s-diff-port-103644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:50.479945  573699 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:50.480003  573699 notify.go:221] Checking for updates...
	I1219 03:04:50.482332  573699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:50.483901  573699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.485635  573699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:04:50.489602  573699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:50.493460  573699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:48.691145  569947 cli_runner.go:164] Run: docker network inspect no-preload-208281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:48.711282  569947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:48.716221  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.729144  569947 kubeadm.go:884] updating cluster {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:48.729324  569947 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:04:48.729375  569947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:48.763109  569947 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:48.763136  569947 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:48.763146  569947 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:04:48.763264  569947 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:48.763347  569947 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:48.796269  569947 cni.go:84] Creating CNI manager for ""
	I1219 03:04:48.796300  569947 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:48.796329  569947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:48.796369  569947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208281 NodeName:no-preload-208281 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:48.796558  569947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:48.796669  569947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:04:48.808026  569947 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:48.808102  569947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:48.819240  569947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1219 03:04:48.836384  569947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:04:48.852550  569947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1219 03:04:48.869275  569947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:48.873704  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.886490  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:48.994443  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:49.020494  569947 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281 for IP: 192.168.85.2
	I1219 03:04:49.020518  569947 certs.go:195] generating shared ca certs ...
	I1219 03:04:49.020533  569947 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:49.020722  569947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:49.020809  569947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:49.020826  569947 certs.go:257] generating profile certs ...
	I1219 03:04:49.020975  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.key
	I1219 03:04:49.021064  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key.8f504093
	I1219 03:04:49.021159  569947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key
	I1219 03:04:49.021324  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:49.021373  569947 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:49.021389  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:49.021430  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:49.021457  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:49.021480  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:49.021525  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:49.022292  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:49.050958  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:49.072475  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:49.095867  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:49.124289  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:04:49.150664  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:04:49.188239  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:49.216791  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:49.242767  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:49.264732  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:49.286635  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:49.313716  569947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:49.329405  569947 ssh_runner.go:195] Run: openssl version
	I1219 03:04:49.337082  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.347002  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:49.355979  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.360975  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.361048  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.457547  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:49.470846  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.484764  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:49.501564  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510435  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510523  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.583657  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:49.596341  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.615267  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:49.637741  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651506  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651606  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.719393  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:49.738446  569947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:49.759885  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:49.839963  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:49.916940  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:49.984478  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:50.052790  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:50.213057  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:50.323267  569947 kubeadm.go:401] StartCluster: {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.323602  569947 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:50.323919  569947 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:50.475134  569947 cri.go:92] found id: "cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa"
	I1219 03:04:50.475159  569947 cri.go:92] found id: "fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569"
	I1219 03:04:50.475166  569947 cri.go:92] found id: "e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a"
	I1219 03:04:50.475171  569947 cri.go:92] found id: "496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c"
	I1219 03:04:50.475175  569947 cri.go:92] found id: "0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7"
	I1219 03:04:50.475180  569947 cri.go:92] found id: "1b139b90f72cc73cf0a391fb1b6dde88df245b3d92b6a686104996e14c38330c"
	I1219 03:04:50.475184  569947 cri.go:92] found id: "6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5"
	I1219 03:04:50.475506  569947 cri.go:92] found id: "6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e"
	I1219 03:04:50.475532  569947 cri.go:92] found id: "0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9"
	I1219 03:04:50.475549  569947 cri.go:92] found id: "7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459"
	I1219 03:04:50.475553  569947 cri.go:92] found id: "06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b"
	I1219 03:04:50.475557  569947 cri.go:92] found id: "ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400"
	I1219 03:04:50.475562  569947 cri.go:92] found id: ""
	I1219 03:04:50.475632  569947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:50.558499  569947 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","pid":805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e/rootfs","created":"2025-12-19T03:04:49.720787385Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-208281_355754afcd0ce2d7bab6c853c60e836c","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2/rootfs","created":"2025-12-19T03:04:49.778097457Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.c
ri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-208281_e43ae2e7891eaa1ff806e636f311fb81","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","pid":838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07/rootfs","created":"2025-12-19T03:04:49.777265025Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kub
ernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-208281_80442131b1359e6657f2959b40f80467","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"},{"ociVersion":"1.2.1","id":"496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","pid":902,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c/rootfs","created":"2025-12-19T03:04:49.944110218Z","annotations":{"io.kubernetes.cri.container-name":"kube-apis
erver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3/rootfs","created":"2025-12-19T03:04:49.76636358Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-c
pu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208281_93a9992ff7a9c41e489b493737b5b488","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa/rootfs","created":"2025-12-19T03:04:50.065275653Z","annotations":{"io.kubernetes
.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a/rootfs","created":"2025-12-19T03:04:50.024946214Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-
name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569/rootfs","created":"2025-12-19T03:04:50.153274168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf5
1455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"}]
	I1219 03:04:50.559253  569947 cri.go:129] list returned 8 containers
	I1219 03:04:50.559288  569947 cri.go:132] container: {ID:2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e Status:running}
	I1219 03:04:50.559310  569947 cri.go:134] skipping 2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e - not in ps
	I1219 03:04:50.559318  569947 cri.go:132] container: {ID:38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 Status:running}
	I1219 03:04:50.559326  569947 cri.go:134] skipping 38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 - not in ps
	I1219 03:04:50.559332  569947 cri.go:132] container: {ID:46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 Status:running}
	I1219 03:04:50.559338  569947 cri.go:134] skipping 46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 - not in ps
	I1219 03:04:50.559343  569947 cri.go:132] container: {ID:496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c Status:running}
	I1219 03:04:50.559363  569947 cri.go:138] skipping {496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c running}: state = "running", want "paused"
	I1219 03:04:50.559373  569947 cri.go:132] container: {ID:7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 Status:running}
	I1219 03:04:50.559381  569947 cri.go:134] skipping 7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 - not in ps
	I1219 03:04:50.559386  569947 cri.go:132] container: {ID:cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa Status:running}
	I1219 03:04:50.559393  569947 cri.go:138] skipping {cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa running}: state = "running", want "paused"
	I1219 03:04:50.559400  569947 cri.go:132] container: {ID:e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a Status:running}
	I1219 03:04:50.559406  569947 cri.go:138] skipping {e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a running}: state = "running", want "paused"
	I1219 03:04:50.559412  569947 cri.go:132] container: {ID:fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 Status:running}
	I1219 03:04:50.559419  569947 cri.go:138] skipping {fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 running}: state = "running", want "paused"
	I1219 03:04:50.559472  569947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:50.576564  569947 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:50.576683  569947 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:50.576777  569947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:50.600225  569947 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:50.601759  569947 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208281" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.605721  569947 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208281" cluster setting kubeconfig missing "no-preload-208281" context setting]
	I1219 03:04:50.610686  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.613392  569947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:50.647032  569947 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1219 03:04:50.647196  569947 kubeadm.go:602] duration metric: took 70.481994ms to restartPrimaryControlPlane
	I1219 03:04:50.647478  569947 kubeadm.go:403] duration metric: took 324.224528ms to StartCluster
	I1219 03:04:50.647573  569947 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.647991  569947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.652215  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.652837  569947 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:50.652966  569947 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208281"
	I1219 03:04:50.652984  569947 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208281"
	W1219 03:04:50.652993  569947 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:50.653027  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.653048  569947 config.go:182] Loaded profile config "no-preload-208281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:50.653120  569947 addons.go:70] Setting default-storageclass=true in profile "no-preload-208281"
	I1219 03:04:50.653135  569947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208281"
	I1219 03:04:50.653460  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.653534  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.655588  569947 addons.go:70] Setting metrics-server=true in profile "no-preload-208281"
	I1219 03:04:50.655611  569947 addons.go:239] Setting addon metrics-server=true in "no-preload-208281"
	W1219 03:04:50.655621  569947 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:50.655656  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.656118  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.656525  569947 addons.go:70] Setting dashboard=true in profile "no-preload-208281"
	I1219 03:04:50.656563  569947 addons.go:239] Setting addon dashboard=true in "no-preload-208281"
	W1219 03:04:50.656574  569947 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:50.656622  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.657316  569947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:50.657617  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.660722  569947 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:50.661854  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:50.707508  569947 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:50.708775  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:50.708812  569947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:50.708834  569947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:50.495202  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:50.495941  573699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:50.539840  573699 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:50.540119  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.710990  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.671412726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.711217  573699 docker.go:319] overlay module found
	I1219 03:04:50.713697  573699 out.go:179] * Using the docker driver based on existing profile
	I1219 03:04:50.714949  573699 start.go:309] selected driver: docker
	I1219 03:04:50.714970  573699 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.715089  573699 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:50.716020  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.884011  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.859280212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.884478  573699 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:50.884531  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:50.884789  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:50.884940  573699 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.887403  573699 out.go:179] * Starting "default-k8s-diff-port-103644" primary control-plane node in "default-k8s-diff-port-103644" cluster
	I1219 03:04:50.888689  573699 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:04:50.889896  573699 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:50.891030  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:50.891092  573699 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:04:50.891106  573699 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:50.891194  573699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:50.891211  573699 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:50.891221  573699 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:04:50.891356  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:50.932991  573699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:50.933024  573699 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:50.933040  573699 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:50.933079  573699 start.go:360] acquireMachinesLock for default-k8s-diff-port-103644: {Name:mk39933c40de3c92aeeb6b9d20d3c90e6af0f1fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:50.933158  573699 start.go:364] duration metric: took 48.804µs to acquireMachinesLock for "default-k8s-diff-port-103644"
	I1219 03:04:50.933177  573699 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:04:50.933183  573699 fix.go:54] fixHost starting: 
	I1219 03:04:50.933489  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:50.973427  573699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103644: state=Stopped err=<nil>
	W1219 03:04:50.973619  573699 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:50.748260  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.195228143s)
	I1219 03:04:50.748361  566718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:51.828106  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml: (1.079706419s)
	I1219 03:04:51.828277  566718 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-002036"
	I1219 03:04:51.828773  566718 cli_runner.go:164] Run: docker container inspect old-k8s-version-002036 --format={{.State.Status}}
	I1219 03:04:51.856291  566718 out.go:179] * Verifying dashboard addon...
	I1219 03:04:50.708886  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.709108  569947 addons.go:239] Setting addon default-storageclass=true in "no-preload-208281"
	W1219 03:04:50.709132  569947 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:50.709161  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.709725  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.710101  569947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.710123  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:50.710173  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.716696  569947 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:50.716718  569947 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:50.716777  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.770714  569947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:50.770743  569947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:50.770811  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.772323  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.774548  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.782771  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.818125  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.922492  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:50.961986  569947 node_ready.go:35] waiting up to 6m0s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:50.964889  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.991305  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:50.991337  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:50.997863  569947 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:51.029470  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:51.029507  569947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:51.077218  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:51.083520  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:51.083552  569947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:51.107276  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:52.474618  569947 node_ready.go:49] node "no-preload-208281" is "Ready"
	I1219 03:04:52.474662  569947 node_ready.go:38] duration metric: took 1.512481187s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:52.474682  569947 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:04:52.474743  569947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:51.142743  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.559306992s)
	I1219 03:04:51.142940  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.499593696s)
	I1219 03:04:51.143060  568301 addons.go:500] Verifying addon metrics-server=true in "embed-certs-536489"
	I1219 03:04:51.143722  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:51.144038  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.580066034s)
	I1219 03:04:52.990446  568301 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.445475643s)
	I1219 03:04:52.990490  568301 api_server.go:72] duration metric: took 5.685402741s to wait for apiserver process to appear ...
	I1219 03:04:52.990498  568301 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:52.990528  568301 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:04:52.992275  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.373532841s)
	I1219 03:04:52.992364  568301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:53.002104  568301 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:04:53.006331  568301 api_server.go:141] control plane version: v1.34.3
	I1219 03:04:53.006385  568301 api_server.go:131] duration metric: took 15.878835ms to wait for apiserver health ...
	I1219 03:04:53.006399  568301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:53.016977  568301 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:53.017141  568301 system_pods.go:61] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.017157  568301 system_pods.go:61] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.017165  568301 system_pods.go:61] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.017184  568301 system_pods.go:61] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.017193  568301 system_pods.go:61] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.017199  568301 system_pods.go:61] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.017212  568301 system_pods.go:61] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.017219  568301 system_pods.go:61] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.017225  568301 system_pods.go:61] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.017233  568301 system_pods.go:74] duration metric: took 10.826754ms to wait for pod list to return data ...
	I1219 03:04:53.017244  568301 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:53.020879  568301 default_sa.go:45] found service account: "default"
	I1219 03:04:53.020911  568301 default_sa.go:55] duration metric: took 3.659738ms for default service account to be created ...
	I1219 03:04:53.020925  568301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:53.118092  568301 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:53.118237  568301 system_pods.go:89] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.118277  568301 system_pods.go:89] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.118286  568301 system_pods.go:89] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.118334  568301 system_pods.go:89] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.118346  568301 system_pods.go:89] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.118360  568301 system_pods.go:89] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.118368  568301 system_pods.go:89] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.118508  568301 system_pods.go:89] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.118523  568301 system_pods.go:89] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.118535  568301 system_pods.go:126] duration metric: took 97.602528ms to wait for k8s-apps to be running ...
	I1219 03:04:53.118546  568301 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:53.118629  568301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:53.213539  568301 addons.go:500] Verifying addon dashboard=true in "embed-certs-536489"
	I1219 03:04:53.213985  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:53.214117  568301 system_svc.go:56] duration metric: took 95.561896ms WaitForService to wait for kubelet
	I1219 03:04:53.214162  568301 kubeadm.go:587] duration metric: took 5.909072172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:53.214187  568301 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:53.220086  568301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:53.220122  568301 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:53.220143  568301 node_conditions.go:105] duration metric: took 5.94983ms to run NodePressure ...
	I1219 03:04:53.220159  568301 start.go:242] waiting for startup goroutines ...
	I1219 03:04:53.239792  568301 out.go:179] * Verifying dashboard addon...
	I1219 03:04:51.859124  566718 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:51.862362  566718 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.241980  568301 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:53.245176  568301 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747449  568301 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747476  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.245867  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.747323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:50.976005  573699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103644" ...
	I1219 03:04:50.976124  573699 cli_runner.go:164] Run: docker start default-k8s-diff-port-103644
	I1219 03:04:51.482862  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:51.514418  573699 kic.go:430] container "default-k8s-diff-port-103644" state is running.
	I1219 03:04:51.515091  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:51.545304  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:51.545913  573699 machine.go:94] provisionDockerMachine start ...
	I1219 03:04:51.546012  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:51.578064  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:51.578471  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:51.578526  573699 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:04:51.580615  573699 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46348->127.0.0.1:33098: read: connection reset by peer
	I1219 03:04:54.740022  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.740053  573699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103644"
	I1219 03:04:54.740121  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.764557  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.764812  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.764832  573699 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103644 && echo "default-k8s-diff-port-103644" | sudo tee /etc/hostname
	I1219 03:04:54.940991  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.941090  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.961163  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.961447  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.961472  573699 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103644/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:04:55.112211  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:04:55.112238  573699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:04:55.112272  573699 ubuntu.go:190] setting up certificates
	I1219 03:04:55.112285  573699 provision.go:84] configureAuth start
	I1219 03:04:55.112354  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.131633  573699 provision.go:143] copyHostCerts
	I1219 03:04:55.131701  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:04:55.131722  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:04:55.131814  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:04:55.131992  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:04:55.132009  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:04:55.132066  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:04:55.132178  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:04:55.132189  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:04:55.132230  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:04:55.132339  573699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103644 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-103644 localhost minikube]
	I1219 03:04:55.201421  573699 provision.go:177] copyRemoteCerts
	I1219 03:04:55.201486  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:04:55.201545  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.220254  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.324809  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:04:55.344299  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:04:55.364633  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:04:55.383945  573699 provision.go:87] duration metric: took 271.644189ms to configureAuth
	I1219 03:04:55.383975  573699 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:04:55.384174  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:55.384190  573699 machine.go:97] duration metric: took 3.838258422s to provisionDockerMachine
	I1219 03:04:55.384201  573699 start.go:293] postStartSetup for "default-k8s-diff-port-103644" (driver="docker")
	I1219 03:04:55.384218  573699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:04:55.384292  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:04:55.384363  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.402689  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.509385  573699 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:04:55.513698  573699 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:04:55.513738  573699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:04:55.513752  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:04:55.513809  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:04:55.513923  573699 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:04:55.514061  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:04:55.522610  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:55.542136  573699 start.go:296] duration metric: took 157.911131ms for postStartSetup
	I1219 03:04:55.542235  573699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:04:55.542278  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.560317  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.676892  573699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:04:55.683207  573699 fix.go:56] duration metric: took 4.75001221s for fixHost
	I1219 03:04:55.683240  573699 start.go:83] releasing machines lock for "default-k8s-diff-port-103644", held for 4.750073001s
	I1219 03:04:55.683337  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.706632  573699 ssh_runner.go:195] Run: cat /version.json
	I1219 03:04:55.706696  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.706708  573699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:04:55.706796  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.729248  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.729555  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.832375  573699 ssh_runner.go:195] Run: systemctl --version
	I1219 03:04:55.888761  573699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:04:55.894089  573699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:04:55.894170  573699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:04:55.902973  573699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:04:55.903001  573699 start.go:496] detecting cgroup driver to use...
	I1219 03:04:55.903039  573699 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:04:55.903123  573699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:04:55.924413  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:04:55.939247  573699 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:04:55.939312  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:04:55.955848  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:04:55.970636  573699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:04:56.060548  573699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:04:56.151469  573699 docker.go:234] disabling docker service ...
	I1219 03:04:56.151544  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:04:56.168733  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:04:56.183785  573699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:04:56.269923  573699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:04:56.358410  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:04:56.374184  573699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:04:56.391509  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:04:56.403885  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:04:56.418704  573699 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:04:56.418843  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:04:56.432502  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.446280  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:04:56.458732  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.471691  573699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:04:56.482737  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:04:56.494667  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:04:56.507284  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:04:56.520174  573699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:04:56.530768  573699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:04:56.541170  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:56.646657  573699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:04:56.781992  573699 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:04:56.782112  573699 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:04:56.788198  573699 start.go:564] Will wait 60s for crictl version
	I1219 03:04:56.788285  573699 ssh_runner.go:195] Run: which crictl
	I1219 03:04:56.793113  573699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:04:56.836402  573699 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:04:56.836474  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.864133  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.898122  573699 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:04:53.197683  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23269288s)
	I1219 03:04:53.197756  569947 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.199861038s)
	I1219 03:04:53.197848  569947 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:53.197862  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.120620602s)
	I1219 03:04:53.198058  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.09074876s)
	I1219 03:04:53.198096  569947 addons.go:500] Verifying addon metrics-server=true in "no-preload-208281"
	I1219 03:04:53.198179  569947 api_server.go:72] duration metric: took 2.540661776s to wait for apiserver process to appear ...
	I1219 03:04:53.198202  569947 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:53.198229  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.198445  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:53.205510  569947 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:04:53.205637  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.205671  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:53.698608  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.705658  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.705697  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:54.198361  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:54.202897  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:04:54.204079  569947 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:04:54.204114  569947 api_server.go:131] duration metric: took 1.005903946s to wait for apiserver health ...
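	The wait loop above polls `/healthz` on a roughly 500ms cadence, treating a 500 (here caused by the `rbac/bootstrap-roles` post-start hook still settling) as retryable until the endpoint returns 200 `ok`. A sketch of that retry shape, where `probe` is a hypothetical stub standing in for the real HTTPS request (it is not minikube code; it simply returns 500 twice, then 200):

```shell
# Stub for the apiserver healthz endpoint: 500 for the first two
# attempts (post-start hooks not done), then 200.
probe() {
  if [ "$1" -lt 3 ]; then echo 500; else echo 200; fi
}

attempts=0
status=0
while [ "$status" != 200 ]; do
  attempts=$((attempts + 1))
  status=$(probe "$attempts")
  [ "$status" = 200 ] || sleep 0.5   # mirror the log's ~500ms cadence
done
echo "healthz ok after $attempts attempts"   # → healthz ok after 3 attempts
```

	The real wait additionally caps total time (the `duration metric: took 1.005903946s` line is that elapsed time) rather than looping forever.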
	I1219 03:04:54.204127  569947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:54.208336  569947 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:54.208377  569947 system_pods.go:61] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.208389  569947 system_pods.go:61] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.208403  569947 system_pods.go:61] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.208424  569947 system_pods.go:61] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.208437  569947 system_pods.go:61] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.208444  569947 system_pods.go:61] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.208460  569947 system_pods.go:61] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.208472  569947 system_pods.go:61] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.208477  569947 system_pods.go:61] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.208488  569947 system_pods.go:74] duration metric: took 4.352835ms to wait for pod list to return data ...
	I1219 03:04:54.208503  569947 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:54.211346  569947 default_sa.go:45] found service account: "default"
	I1219 03:04:54.211373  569947 default_sa.go:55] duration metric: took 2.86243ms for default service account to be created ...
	I1219 03:04:54.211385  569947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:54.214301  569947 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:54.214337  569947 system_pods.go:89] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.214347  569947 system_pods.go:89] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.214360  569947 system_pods.go:89] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.214369  569947 system_pods.go:89] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.214377  569947 system_pods.go:89] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.214386  569947 system_pods.go:89] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.214402  569947 system_pods.go:89] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.214411  569947 system_pods.go:89] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.214421  569947 system_pods.go:89] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.214431  569947 system_pods.go:126] duration metric: took 3.039478ms to wait for k8s-apps to be running ...
	I1219 03:04:54.214443  569947 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:54.214504  569947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:54.371132  569947 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.165499888s)
	I1219 03:04:54.371186  569947 system_svc.go:56] duration metric: took 156.734958ms WaitForService to wait for kubelet
	I1219 03:04:54.371215  569947 kubeadm.go:587] duration metric: took 3.713723941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:54.371244  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:04:54.371246  569947 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:54.374625  569947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:54.374660  569947 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:54.374679  569947 node_conditions.go:105] duration metric: took 3.423654ms to run NodePressure ...
	I1219 03:04:54.374695  569947 start.go:242] waiting for startup goroutines ...
	I1219 03:04:57.635651  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.264367144s)
	I1219 03:04:57.635887  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:57.949184  569947 addons.go:500] Verifying addon dashboard=true in "no-preload-208281"
	I1219 03:04:57.949557  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:57.976511  569947 out.go:179] * Verifying dashboard addon...
	I1219 03:04:56.899304  573699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:56.919626  573699 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:56.924517  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
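	The `/etc/hosts` update above is a strip-then-append idiom: filter out any existing `host.minikube.internal` line, append the current gateway mapping, write to a temp file, and `sudo cp` it back (a plain `>` redirect would run without root). A sketch of the same pattern against a throwaway hosts file (the `192.168.99.1` address is an arbitrary example, not from this run):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n' > "$hosts"

# Rebuild the file without the old entry, then append the new mapping.
# $'\t...' is bash ANSI-C quoting for a literal tab in the grep pattern.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.99.1\thost.minikube.internal\n'; } > "/tmp/h.$$"
cp "/tmp/h.$$" "$hosts"

grep -c 'host.minikube.internal' "$hosts"   # → 1 (old entry replaced, not duplicated)
rm -f "$hosts" "/tmp/h.$$"
```

	Writing to `/tmp/h.$$` first matters because redirecting `>` onto the file being `grep`'d would truncate it before grep reads it.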
	I1219 03:04:56.937946  573699 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:56.938108  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:56.938182  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.968240  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.968267  573699 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:04:56.968327  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.997359  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.997383  573699 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:56.997392  573699 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 containerd true true} ...
	I1219 03:04:56.997515  573699 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-103644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
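	The unit text above is installed as a systemd drop-in (the log scp's it to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and then runs `daemon-reload`). The empty `ExecStart=` line is deliberate: systemd requires clearing the base unit's `ExecStart` before a drop-in may set a replacement. A sketch writing such a drop-in to a scratch directory, with an illustrative subset of the flags:

```shell
unitdir=$(mktemp -d)
mkdir -p "$unitdir/kubelet.service.d"
cat > "$unitdir/kubelet.service.d/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet \
  --config=/var/lib/kubelet/config.yaml \
  --hostname-override=default-k8s-diff-port-103644 \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --node-ip=192.168.94.2
EOF

# On a real host this would be followed by:
#   sudo systemctl daemon-reload && sudo systemctl restart kubelet
grep -c '^ExecStart' "$unitdir/kubelet.service.d/10-kubeadm.conf"   # → 2
rm -rf "$unitdir"
```

	The two `ExecStart` lines (one empty, one full) are exactly what the `[Service]` section in the log shows.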
	I1219 03:04:56.997591  573699 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:57.033726  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:57.033760  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:57.033788  573699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:57.033818  573699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103644 NodeName:default-k8s-diff-port-103644 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:57.034013  573699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-103644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:57.034110  573699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:04:57.054291  573699 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:57.054366  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:57.069183  573699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1219 03:04:57.092986  573699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:04:57.114537  573699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1219 03:04:57.135768  573699 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:57.141830  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:57.157200  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:57.285296  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:57.321401  573699 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644 for IP: 192.168.94.2
	I1219 03:04:57.321425  573699 certs.go:195] generating shared ca certs ...
	I1219 03:04:57.321445  573699 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:57.321651  573699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:57.321728  573699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:57.321741  573699 certs.go:257] generating profile certs ...
	I1219 03:04:57.321895  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.key
	I1219 03:04:57.321969  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key.eac4724a
	I1219 03:04:57.322032  573699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key
	I1219 03:04:57.322452  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:57.322563  573699 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:57.322947  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:57.323038  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:57.323130  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:57.323212  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:57.323310  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:57.324261  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:57.367430  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:57.395772  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:57.447975  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:57.485724  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:04:57.550160  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:04:57.586359  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:57.650368  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:57.705528  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:57.753827  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:57.796129  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:57.846633  573699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:57.874041  573699 ssh_runner.go:195] Run: openssl version
	I1219 03:04:57.883186  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.893276  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:57.903322  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908713  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908788  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.959424  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:57.975955  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:57.987406  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:57.999924  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007017  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007094  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.066450  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:58.084889  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.104839  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:58.121039  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128831  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128908  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.238719  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
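The three installs above follow the same pattern: verify the PEM is non-empty, link it into `/etc/ssl/certs`, compute its OpenSSL subject hash, then confirm the `<hash>.0` symlink (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) that TLS clients use for lookup. A minimal sketch of that per-certificate command sequence; the function name is illustrative, not minikube's actual Go code:

```python
# Illustrative sketch of the per-certificate trust-install sequence in the log.
# `subject_hash` is what `openssl x509 -hash -noout -in <pem>` would print.

def install_commands(pem_path: str, subject_hash: str) -> list[str]:
    """Return the shell commands run for one trusted cert, in log order."""
    name = pem_path.split("/")[-1]
    return [
        f"sudo test -s {pem_path}",                       # reject empty files
        f"sudo ln -fs {pem_path} /etc/ssl/certs/{name}",  # link by filename
        f"openssl x509 -hash -noout -in {pem_path}",      # yields subject_hash
        f"sudo test -L /etc/ssl/certs/{subject_hash}.0",  # hash-named symlink
    ]

cmds = install_commands("/usr/share/ca-certificates/minikubeCA.pem", "b5213941")
```

Running this for `minikubeCA.pem` reproduces the four `ssh_runner` commands logged for it above.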
	I1219 03:04:58.257473  573699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:58.269077  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:58.373050  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:58.472122  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:58.523474  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:58.567812  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:58.624150  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
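Each control-plane cert above is probed with `openssl x509 -checkend 86400`, which exits 0 only if the certificate will still be valid for the next 86400 seconds (24 hours); a failure here would trigger cert regeneration. A pure-function model of that check, assuming epoch-second inputs (this mirrors the openssl semantics, not minikube code):

```python
# Model of `openssl x509 -checkend <window>`: the check passes while the
# certificate's notAfter lies strictly beyond now + window.

def checkend(not_after_epoch: float, now_epoch: float, window_s: int = 86400) -> bool:
    """True if the cert will NOT expire within window_s seconds of now."""
    return not_after_epoch > now_epoch + window_s

# A cert with two days of validity left passes the one-day window;
# a cert with one hour left fails it.
```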
	I1219 03:04:58.663023  573699 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:58.663147  573699 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:58.663225  573699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:58.698055  573699 cri.go:92] found id: "19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c"
	I1219 03:04:58.698124  573699 cri.go:92] found id: "c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7"
	I1219 03:04:58.698150  573699 cri.go:92] found id: "a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1"
	I1219 03:04:58.698161  573699 cri.go:92] found id: "fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652"
	I1219 03:04:58.698166  573699 cri.go:92] found id: "36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d"
	I1219 03:04:58.698176  573699 cri.go:92] found id: "ed906de27de9c3783be2432f68b3e79b562b368da4fe5ddde333748fe58c2534"
	I1219 03:04:58.698180  573699 cri.go:92] found id: "72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d"
	I1219 03:04:58.698185  573699 cri.go:92] found id: "872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358"
	I1219 03:04:58.698189  573699 cri.go:92] found id: "dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525"
	I1219 03:04:58.698201  573699 cri.go:92] found id: "ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503"
	I1219 03:04:58.698208  573699 cri.go:92] found id: "069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd"
	I1219 03:04:58.698212  573699 cri.go:92] found id: "49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233"
	I1219 03:04:58.698216  573699 cri.go:92] found id: ""
	I1219 03:04:58.698271  573699 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:58.725948  573699 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537/rootfs","created":"2025-12-19T03:04:58.065318041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-103644_50f4d1ce4fca33a4531f882f5fb97a4e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","pid":981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c/rootfs","created":"2025-12-19T03:04:58.375811399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.3","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be/rootfs","created":"2025-12-19T03:04:58.067793692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-103644_ac53bb8a0832eefbaa4a648be6aad901","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f/rootfs","created":"2025-12-19T03:04:58.050783422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-103644_996cf4b38188d4b0d664648ad2102013","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","pid":796,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc/rootfs","created":"2025-12-19T03:04:58.031779484Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-diff-port-103644_4275d7c883d3f735b8de47264bc63415","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1/rootfs","created":"2025-12-19T03:04:58.294875595Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.3","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7/rootfs","created":"2025-12-19T03:04:58.293243949Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.3","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652/rootfs","created":"2025-12-19T03:04:58.225549561Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"}]
	I1219 03:04:58.726160  573699 cri.go:129] list returned 8 containers
	I1219 03:04:58.726176  573699 cri.go:132] container: {ID:0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 Status:running}
	I1219 03:04:58.726215  573699 cri.go:134] skipping 0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 - not in ps
	I1219 03:04:58.726225  573699 cri.go:132] container: {ID:19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c Status:running}
	I1219 03:04:58.726238  573699 cri.go:138] skipping {19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c running}: state = "running", want "paused"
	I1219 03:04:58.726253  573699 cri.go:132] container: {ID:6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be Status:running}
	I1219 03:04:58.726263  573699 cri.go:134] skipping 6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be - not in ps
	I1219 03:04:58.726272  573699 cri.go:132] container: {ID:6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f Status:running}
	I1219 03:04:58.726282  573699 cri.go:134] skipping 6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f - not in ps
	I1219 03:04:58.726287  573699 cri.go:132] container: {ID:84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc Status:running}
	I1219 03:04:58.726296  573699 cri.go:134] skipping 84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc - not in ps
	I1219 03:04:58.726300  573699 cri.go:132] container: {ID:a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 Status:running}
	I1219 03:04:58.726310  573699 cri.go:138] skipping {a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 running}: state = "running", want "paused"
	I1219 03:04:58.726317  573699 cri.go:132] container: {ID:c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 Status:running}
	I1219 03:04:58.726327  573699 cri.go:138] skipping {c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 running}: state = "running", want "paused"
	I1219 03:04:58.726334  573699 cri.go:132] container: {ID:fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 Status:running}
	I1219 03:04:58.726341  573699 cri.go:138] skipping {fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 running}: state = "running", want "paused"
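The `cri.go:132-138` decisions above apply two filters to the `runc list` output: entries whose IDs did not appear in the earlier `crictl ps` query are pause sandboxes and are skipped as "not in ps", and the remainder are skipped unless their runc state matches the wanted state (here `paused`). A small sketch of that selection logic, with illustrative IDs rather than minikube's actual Go implementation:

```python
# Sketch of the container-selection logic the cri.go log lines describe.
# runc reports both pod sandboxes and containers; only IDs that `crictl ps`
# returned are real containers, and only those in the wanted state match.

def select_containers(runc_list: list[dict], crictl_ids: set[str],
                      want_state: str = "paused") -> list[str]:
    selected = []
    for c in runc_list:
        if c["id"] not in crictl_ids:
            continue  # "skipping <id> - not in ps" (a pause sandbox)
        if c["status"] != want_state:
            continue  # 'state = "running", want "paused"'
        selected.append(c["id"])
    return selected

runc_entries = [
    {"id": "sandbox1", "status": "running"},  # sandbox: absent from crictl ps
    {"id": "ctr1", "status": "running"},      # container, but not paused
]
# With want_state="paused" and nothing paused, every entry is skipped and the
# result is empty, matching the log above.
```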
	I1219 03:04:58.726406  573699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:58.736002  573699 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:58.736024  573699 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:58.736083  573699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:58.745325  573699 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:58.746851  573699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103644" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.747840  573699 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103644" cluster setting kubeconfig missing "default-k8s-diff-port-103644" context setting]
	I1219 03:04:58.749236  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.751783  573699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:58.761185  573699 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:04:58.761233  573699 kubeadm.go:602] duration metric: took 25.202742ms to restartPrimaryControlPlane
	I1219 03:04:58.761245  573699 kubeadm.go:403] duration metric: took 98.23938ms to StartCluster
	I1219 03:04:58.761266  573699 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.761344  573699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.763956  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.764278  573699 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:58.764352  573699 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:58.764458  573699 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764482  573699 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.764491  573699 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:58.764498  573699 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764518  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:58.764533  573699 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764530  573699 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764551  573699 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764557  573699 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764521  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764523  573699 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103644"
	W1219 03:04:58.764565  573699 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:58.764660  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764898  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765067  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	W1219 03:04:58.764563  573699 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:58.765224  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.765244  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765778  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.766439  573699 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:58.769848  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:58.795158  573699 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.795295  573699 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:58.795354  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.796260  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.798810  573699 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:58.798816  573699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:57.865290  566718 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.865322  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.373051  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.867408  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.364332  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.245497  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.746387  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.245217  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.749455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.246279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.748208  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.247627  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.745395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.247400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.747210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.799225  573699 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:58.799247  573699 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:58.799304  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.799993  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:58.800017  573699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:58.800075  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.800356  573699 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:58.800371  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:58.800429  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.837919  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.838753  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.846681  573699 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:58.846725  573699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:58.846799  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.869014  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.891596  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.990117  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:59.008626  573699 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:59.009409  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:59.016187  573699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:04:59.016907  573699 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:59.044939  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:59.044973  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:59.048120  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:59.087063  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:59.087153  573699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:59.114132  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:59.114163  573699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:59.144085  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:05:00.372562  573699 node_ready.go:49] node "default-k8s-diff-port-103644" is "Ready"
	I1219 03:05:00.372622  573699 node_ready.go:38] duration metric: took 1.356373278s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:05:00.372644  573699 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:00.372706  573699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:57.979521  569947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:57.983495  569947 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.983523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.489816  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.984080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.484148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.983915  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.484939  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.985080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.486418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.986557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.484684  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.866115  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.365239  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.866184  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.366415  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.863549  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.364375  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.863998  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.363890  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.863749  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.382768  566718 kapi.go:107] duration metric: took 12.523639555s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:04.433515  566718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-002036 addons enable metrics-server
	
	I1219 03:05:04.435631  566718 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:05:04.437408  566718 addons.go:546] duration metric: took 22.668379604s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:05:04.437463  566718 start.go:247] waiting for cluster config update ...
	I1219 03:05:04.437482  566718 start.go:256] writing updated cluster config ...
	I1219 03:05:04.437853  566718 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:04.443668  566718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:04.450779  566718 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:00.248093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.749216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.247778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.747890  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.245449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.746684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.247359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.746557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.746278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.448117  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.43867528s)
	I1219 03:05:01.448182  573699 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.431240621s)
	I1219 03:05:01.448196  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.399991052s)
	I1219 03:05:01.448260  573699 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:01.448385  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.304270108s)
	I1219 03:05:01.448406  573699 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:05:01.448485  573699 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.075756393s)
	I1219 03:05:01.448520  573699 api_server.go:72] duration metric: took 2.684209271s to wait for apiserver process to appear ...
	I1219 03:05:01.448536  573699 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:01.448558  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.448716  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:01.458744  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:01.458783  573699 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:01.950069  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.959300  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:01.960703  573699 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:01.960739  573699 api_server.go:131] duration metric: took 512.19419ms to wait for apiserver health ...
	I1219 03:05:01.960751  573699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:01.965477  573699 system_pods.go:59] 9 kube-system pods found
	I1219 03:05:01.965544  573699 system_pods.go:61] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.965560  573699 system_pods.go:61] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.965595  573699 system_pods.go:61] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.965611  573699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.965623  573699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.965631  573699 system_pods.go:61] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.965640  573699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.965653  573699 system_pods.go:61] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.965660  573699 system_pods.go:61] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.965670  573699 system_pods.go:74] duration metric: took 4.91154ms to wait for pod list to return data ...
	I1219 03:05:01.965682  573699 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:01.969223  573699 default_sa.go:45] found service account: "default"
	I1219 03:05:01.969255  573699 default_sa.go:55] duration metric: took 3.563468ms for default service account to be created ...
	I1219 03:05:01.969269  573699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:01.973647  573699 system_pods.go:86] 9 kube-system pods found
	I1219 03:05:01.973775  573699 system_pods.go:89] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.973790  573699 system_pods.go:89] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.973797  573699 system_pods.go:89] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.973804  573699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.973810  573699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.973828  573699 system_pods.go:89] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.973834  573699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.973840  573699 system_pods.go:89] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.973843  573699 system_pods.go:89] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.973852  573699 system_pods.go:126] duration metric: took 4.574679ms to wait for k8s-apps to be running ...
	I1219 03:05:01.973859  573699 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:01.973912  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:02.653061  573699 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.204735295s)
	I1219 03:05:02.653137  573699 system_svc.go:56] duration metric: took 679.266214ms WaitForService to wait for kubelet
	I1219 03:05:02.653168  573699 kubeadm.go:587] duration metric: took 3.888855367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:02.653197  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:02.653199  573699 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:02.656332  573699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:02.656365  573699 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:02.656382  573699 node_conditions.go:105] duration metric: took 3.090983ms to run NodePressure ...
	I1219 03:05:02.656398  573699 start.go:242] waiting for startup goroutines ...
	I1219 03:05:05.900902  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.247656336s)
	I1219 03:05:05.901008  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:06.370072  573699 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:05:06.370443  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:06.413077  573699 out.go:179] * Verifying dashboard addon...
	I1219 03:05:02.984573  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.483377  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.983965  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.483784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.983862  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.484412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.985034  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.484458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.983536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:06.463527  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:08.958366  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:05.245656  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.747655  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.245722  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.748049  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.245806  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.806712  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.317551  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.746359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.246666  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.745789  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.432631  573699 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:06.442236  573699 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:06.442267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.938273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.436226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.935844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.437222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.937396  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.436432  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.937420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.436795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.982775  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.484705  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.483954  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.984850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.484036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.985868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.484253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.984283  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.483325  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:11.457419  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:13.957361  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:10.247114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.246179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.245687  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.245905  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.745641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.245181  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.937352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.436009  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.937001  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.437140  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.937021  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.936272  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.937045  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.436754  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.983838  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.483669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.983389  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.483140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.483333  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.983426  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.483195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.982683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.483883  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:16.457830  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:18.956955  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:15.245238  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.746028  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.245738  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.746152  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.245944  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.244810  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.745484  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.245267  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.747027  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.935367  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.437144  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.936697  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.436257  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.938151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.436806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.936368  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.436056  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.936823  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.436574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.956728  566718 pod_ready.go:94] pod "coredns-5dd5756b68-l88tx" is "Ready"
	I1219 03:05:20.956755  566718 pod_ready.go:86] duration metric: took 16.505943894s for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.959784  566718 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.964097  566718 pod_ready.go:94] pod "etcd-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.964121  566718 pod_ready.go:86] duration metric: took 4.312579ms for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.967209  566718 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.971311  566718 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.971340  566718 pod_ready.go:86] duration metric: took 4.107095ms for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.974403  566718 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.155192  566718 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-002036" is "Ready"
	I1219 03:05:21.155230  566718 pod_ready.go:86] duration metric: took 180.802142ms for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.356374  566718 pod_ready.go:83] waiting for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.755068  566718 pod_ready.go:94] pod "kube-proxy-666m9" is "Ready"
	I1219 03:05:21.755101  566718 pod_ready.go:86] duration metric: took 398.695005ms for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.955309  566718 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355240  566718 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-002036" is "Ready"
	I1219 03:05:22.355268  566718 pod_ready.go:86] duration metric: took 399.930732ms for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355280  566718 pod_ready.go:40] duration metric: took 17.911572961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.403101  566718 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1219 03:05:22.405195  566718 out.go:203] 
	W1219 03:05:22.406549  566718 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1219 03:05:22.407721  566718 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1219 03:05:22.409075  566718 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-002036" cluster and "default" namespace by default
	I1219 03:05:17.983934  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.983469  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.483031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.483856  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.983202  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.983682  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.483477  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.246405  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.745732  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.745513  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.246072  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.746161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.245454  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.745802  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.246011  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.745886  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.936632  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.937387  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.438356  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.936036  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.436638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.436285  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.936343  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.436214  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.983526  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.483608  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.984007  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.483768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.983330  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.483626  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.983245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.983688  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.483645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.245298  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.745913  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.246357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.746837  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.245727  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.745064  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.245698  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.745390  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.245749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.746545  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.436179  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.936807  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.436692  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.936427  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.436416  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.936100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.436165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.936887  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.437744  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.983729  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.484151  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.982796  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.483575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.983807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.482703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.483041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.245841  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.746191  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.246984  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.746555  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.245535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.245430  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.746001  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.245532  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.745216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.936806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.437044  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.937073  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.436137  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.937365  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.936352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.435813  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.936438  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.435923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.483382  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.984500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.483032  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.984071  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.482466  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.983161  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.482900  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.983524  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.245754  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.745276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.246044  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.747272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.246098  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.746535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.245821  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.745937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.245762  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.745615  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.936381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.436916  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.936622  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.436000  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.937259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.437162  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.937047  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.437352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.936682  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.436615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.983600  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.483773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.983567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.483752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.983264  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.983322  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.483362  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.983957  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.484274  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.246185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.746459  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.246128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.745336  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.245848  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.745183  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.938808  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.437447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.936560  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.436119  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.935681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.436727  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.436379  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.936023  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.983002  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.484428  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.484439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.983087  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.483617  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.483126  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.982743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.483122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.747099  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.245089  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.746901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.245684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.745166  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.245353  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.745700  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.245083  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.745319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.936637  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.436382  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.935972  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.436262  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.245533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (same "waiting for pod" line repeated at ~500ms intervals by processes 573699, 569947, and 568301; the pod never left Pending) ...
	I1219 03:06:29.745086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.437190  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.984320  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.246209  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.745803  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.245503  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.246768  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.745863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.245185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.745549  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.245747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.746416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.935759  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.435954  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.436706  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.936420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.436605  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.937043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.437152  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.436211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.983286  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.483036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.485767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.983683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.483037  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.483748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.245980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.745904  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.747073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.246061  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.746010  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.745926  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.245654  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.745463  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.437530  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.436942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.437229  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.936794  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.436501  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.936447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.436258  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.983789  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.483692  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.983255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.483001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.982877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.483721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.983399  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.482771  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.983968  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.483847  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.246603  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.745229  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.245985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.746233  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.246354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.746354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.245729  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.745993  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.745977  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.436604  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.936997  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.436608  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.936332  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.436076  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.937096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.936644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.436313  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.483231  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.983328  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.483130  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.983671  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.984498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.483267  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.982818  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.483172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.246007  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.246281  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.746636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.245338  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.746505  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.246541  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.246003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.746025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.935627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.437425  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.936905  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.436271  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.436681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.937261  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.436230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.983761  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.483697  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.983928  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.484339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.983038  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.483830  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.983519  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.246203  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.745909  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.245212  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.746317  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.246429  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.746706  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.245252  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.746054  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.248935  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.436150  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.937541  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.436306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.937380  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.437032  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.437101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.435707  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.983425  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.482996  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.984413  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.483150  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.983223  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.483220  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.983167  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.482640  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.983417  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.483783  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.245215  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.246277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.245861  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.745707  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.245371  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.245515  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.745912  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.936135  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.437841  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.936910  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.436323  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.936660  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.436524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.936221  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.436563  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.935913  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.436645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.984125  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.483388  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.982737  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.483773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.983545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.483422  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.983154  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.483664  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.983641  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.483442  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.245728  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.745308  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.246025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.745765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.246408  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.746848  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.245127  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.746104  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.246223  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.936306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.437231  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.937148  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.936729  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.436019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.983225  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.245839  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... the three dashboard processes (PIDs 568301, 569947, 573699) repeat this identical poll at ~500ms intervals through 03:07:47; the pod remains Pending: [<nil>] throughout ...]
	I1219 03:07:45.745840  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.245710  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.747059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.245761  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.746224  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.245979  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.746397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.246462  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.745161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.936393  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.435574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.936269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.436736  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.935923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.436191  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.937125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.436724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.936060  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.436464  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.983875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.983702  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.483743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.983649  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.484353  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.484106  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.983289  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.483003  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.245241  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.746800  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.245636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.745903  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.245501  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.746786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.245828  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.746731  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.245243  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.746109  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.936423  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.436185  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.937335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.435811  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.936607  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.437193  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.436703  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.936452  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.436033  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.982921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.984334  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.483331  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.983338  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.983619  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.483807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.983721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.483219  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.745310  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.748380  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.246087  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.246172  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.746116  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.246000  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.745364  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.938959  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.436375  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.936439  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.435973  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.936388  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.435955  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.937067  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.436689  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.936873  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.436068  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.983216  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.982893  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.983507  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.483848  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.983741  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.483139  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.483474  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.245849  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.745943  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.245514  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.745976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.245776  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.745774  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.246195  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.937126  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.437088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.936378  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.435816  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.936486  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.436861  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.936773  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.437070  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.983196  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.482648  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.984096  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.483607  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.483828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.983686  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.484218  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.984889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.484117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.245432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.746171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.246148  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.746794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.245134  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.745858  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.245332  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.245744  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.745345  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.935722  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.437147  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.937110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.436107  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.936683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.437338  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.937224  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.435895  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.936364  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.436440  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.984241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.483451  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.983165  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.483042  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.982951  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.484340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.983004  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.483822  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.983489  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.483877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.246451  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.746155  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.246021  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.745725  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.245017  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.747153  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.246746  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.245869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.937288  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.436218  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.937058  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.436201  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.936942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.436514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.937227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.435900  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.937246  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.437248  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.483319  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.983759  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.483672  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.983171  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.482646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.983174  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.983864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.484102  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.245723  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.745561  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.247817  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.747200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.246180  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.746059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.245772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.746003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.245769  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.745631  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.935465  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.436710  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.936296  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.436222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.937015  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.937083  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.436796  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical "waiting for pod" messages repeated at ~500ms intervals by PIDs 573699, 569947, and 568301, 03:08:17.983571 through 03:09:01.245417 ...]
	I1219 03:09:01.245417  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.745772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.745980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.745540  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.245125  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.746311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.436345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.937223  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.436491  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.936542  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.436156  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.936757  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.436434  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.437143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.983140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.483948  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.983404  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.484135  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.983017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.483191  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.983258  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.483593  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.982879  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.482719  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.745523  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.246156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.746714  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.245457  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.745845  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.245496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.745521  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.745647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.936297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.435928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.936499  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.935885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.436830  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.937053  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.436174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.936555  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.436004  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.983540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.483013  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.983280  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.483326  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.983039  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.483498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.483944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.983380  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.483057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.246452  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.746248  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.246124  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.746214  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.245557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.746434  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.245268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.746177  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.245924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.747881  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.936969  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.936145  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.435740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.937011  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.437024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.935613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.436125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.436909  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.984340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.483254  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.984703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.483313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.982835  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.483493  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.982869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.983946  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.483204  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.245275  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.746276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.245920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.746771  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.245651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.744791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.245637  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.745922  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.936545  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.937153  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.435953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.937080  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.936110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.435657  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.935804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.436240  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.983897  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.483952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.484088  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.983714  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.483215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.983277  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.483667  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.982875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.483370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.245437  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.745749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.246263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.245277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.245283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.745807  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.745496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.935998  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.436702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.936853  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.936508  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.435898  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.938866  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.436406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.936267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.435443  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.983387  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.483176  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.984078  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.483842  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.483314  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.483709  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.746235  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.246283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.746411  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.246592  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.745927  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.245680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.745389  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.246386  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.745671  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.936495  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.436178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.435968  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.936852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.436035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.436057  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.936860  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.983478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.483606  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.984122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.490050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.982603  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.483055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.984015  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.483501  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.982832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.245020  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.745924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.245930  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.745911  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.745201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.245713  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.745983  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.245893  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.745539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.935985  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.436747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.936740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.436110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.937088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.436764  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.436386  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.983173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.483859  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.983142  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.984166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.483826  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.983185  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.484158  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.984358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.482832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.246393  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.745896  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.245850  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.246273  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.747864  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.245616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.745334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.246449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.744981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.936971  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.436804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.436958  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.936877  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.436656  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.936136  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.983744  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.482921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.983872  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.483540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.984141  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.483479  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.984063  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.483481  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.746558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.246611  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.745533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.245131  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.746326  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.246887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.745358  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.246189  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.745991  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.937573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.435677  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.936406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.435935  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.936714  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.435885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.936556  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.983361  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.482912  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.983873  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.482660  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.483503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.983067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.483638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.245846  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.746643  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.245931  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.746121  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.246355  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.745777  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.245928  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.246014  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.745623  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.936490  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.437169  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.936638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.435797  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.937106  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.436462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.935673  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.435704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.983064  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.483495  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.983383  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.482815  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.483521  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.983458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.483539  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.982669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.482740  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.245254  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.746529  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.246403  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.746576  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.245194  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.245791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.745384  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.246056  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.745809  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.936502  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.436533  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.936298  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.436872  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.936965  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.436624  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.936645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.435868  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.936019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.436761  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.984260  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.483436  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.983307  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.983837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.983703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.483097  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.984370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.483476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.245416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.745596  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.246315  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.746972  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.246432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.746169  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.245899  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.745701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.246684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.746013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.936103  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.436731  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.936130  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.436934  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.936650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.435890  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.936552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.436324  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.936567  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.436613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.982857  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.483173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.984076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.983152  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.483700  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.483248  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.983111  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.483698  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.245724  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.746426  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.245360  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.245174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.746009  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.246343  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.746019  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.245779  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.745882  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.935947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.437327  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.937129  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.436468  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.436333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.936134  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.436385  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.937151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.437232  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.983942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.483661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.983172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.483439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.982645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.483045  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.984031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.483303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.245641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.745823  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.245494  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.746765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.245879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.745869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.245211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.246504  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.744996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.936844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.436478  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.935984  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.436742  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.935862  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.436143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.936623  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.936964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.436154  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.983001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.483616  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.483478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.982888  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.483505  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.482828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.982887  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.483514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.245552  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.745120  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.246143  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.746163  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.245633  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.745368  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.246475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.745271  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.245933  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.745805  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.935671  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.436335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.936196  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.436273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.436266  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.936782  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.936448  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.436442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.983418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.483281  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.983117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.483767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.984021  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.483731  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.983275  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.483869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.983375  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.482882  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.245668  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.746147  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.246640  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.746736  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.246420  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.745966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.246253  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.745906  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.246303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.745986  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.937381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.436018  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.936227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.437410  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.935713  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.436449  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.935644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.435982  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.483558  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.983528  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.483170  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.984155  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.483754  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.983412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.483938  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.983465  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.483020  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.245720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.745374  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.246756  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.745755  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.245418  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.746818  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.245897  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.745485  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.245161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.746048  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.936177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.437209  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.936770  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.436342  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.936061  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.436819  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.935988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.436564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.935683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.437297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.983512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.484563  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.983839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.483026  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.983149  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.484482  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.983378  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.482721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.246464  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.746065  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.246367  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.746647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.245786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.746272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.245936  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.745748  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.245512  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.745830  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.936845  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.436141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.937290  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.437316  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.435947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.936694  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.936790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.436457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.983105  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.983252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.483908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.983724  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.483864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.983787  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.483233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.983574  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.482995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.245383  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.746128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.246198  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.246100  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.746119  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.246290  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.746036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.745323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.936983  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.437037  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.936606  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.435930  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.936507  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.936455  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.435839  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.436995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.984129  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.484212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.984303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.483306  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.983518  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.482738  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.982612  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.483504  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.983434  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.482971  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.246296  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.746349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.247475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.746626  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.246070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.746520  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.245142  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.745887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.245695  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.745960  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.936318  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.435767  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.936550  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.436719  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.935917  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.435988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.936787  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.436849  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.935749  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.436170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.483708  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.983679  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.483567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.983364  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.483546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.983622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.484178  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.482768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.745743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.246985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.746088  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.245673  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.746257  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.246242  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.745638  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.246113  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.745493  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.936613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.436769  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.937022  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.436509  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.436799  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.935953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.436096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.936230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.482661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.984210  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.482755  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.983557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.483535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.982947  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.483792  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.484233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.246539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.746528  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.746739  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.245697  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.745790  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.245070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.745400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.246339  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.745958  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.935928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.436394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.936522  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.436247  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.936524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.436518  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.936708  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.437978  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.936041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.436496  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.982762  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.983515  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.982989  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.483821  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.983511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.482875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.983288  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.483464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.245892  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.745342  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.246251  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.746455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.246114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.745679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.243066  568301 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:10:53.243101  568301 kapi.go:107] duration metric: took 6m0.001125868s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:53.243227  568301 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:53.244995  568301 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 03:10:53.246175  568301 addons.go:546] duration metric: took 6m5.940868392s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 03:10:53.246216  568301 start.go:247] waiting for cluster config update ...
	I1219 03:10:53.246230  568301 start.go:256] writing updated cluster config ...
	I1219 03:10:53.246533  568301 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:53.251613  568301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:53.256756  568301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.261260  568301 pod_ready.go:94] pod "coredns-66bc5c9577-qmb9z" is "Ready"
	I1219 03:10:53.261285  568301 pod_ready.go:86] duration metric: took 4.502294ms for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.263432  568301 pod_ready.go:83] waiting for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.267796  568301 pod_ready.go:94] pod "etcd-embed-certs-536489" is "Ready"
	I1219 03:10:53.267819  568301 pod_ready.go:86] duration metric: took 4.363443ms for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.269959  568301 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.273954  568301 pod_ready.go:94] pod "kube-apiserver-embed-certs-536489" is "Ready"
	I1219 03:10:53.273978  568301 pod_ready.go:86] duration metric: took 3.994974ms for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.276324  568301 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.655995  568301 pod_ready.go:94] pod "kube-controller-manager-embed-certs-536489" is "Ready"
	I1219 03:10:53.656024  568301 pod_ready.go:86] duration metric: took 379.67922ms for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.856274  568301 pod_ready.go:83] waiting for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.256232  568301 pod_ready.go:94] pod "kube-proxy-qhlhx" is "Ready"
	I1219 03:10:54.256260  568301 pod_ready.go:86] duration metric: took 399.957557ms for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.456456  568301 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856278  568301 pod_ready.go:94] pod "kube-scheduler-embed-certs-536489" is "Ready"
	I1219 03:10:54.856307  568301 pod_ready.go:86] duration metric: took 399.821962ms for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856318  568301 pod_ready.go:40] duration metric: took 1.60467121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:54.908914  568301 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:10:54.910224  568301 out.go:179] * Done! kubectl is now configured to use "embed-certs-536489" cluster and "default" namespace by default
	I1219 03:10:50.936043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.437199  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.937554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.436648  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.935325  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.437090  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.936467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.435747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.937514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.437259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.983483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.483110  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.483441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.983723  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.483799  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.983265  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.482795  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.980094  569947 kapi.go:107] duration metric: took 6m0.000564024s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:57.980271  569947 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:57.982221  569947 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:10:57.983556  569947 addons.go:546] duration metric: took 6m7.330731268s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:10:57.983643  569947 start.go:247] waiting for cluster config update ...
	I1219 03:10:57.983661  569947 start.go:256] writing updated cluster config ...
	I1219 03:10:57.983965  569947 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:57.988502  569947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:57.993252  569947 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:57.997922  569947 pod_ready.go:94] pod "coredns-7d764666f9-hm5hz" is "Ready"
	I1219 03:10:57.997946  569947 pod_ready.go:86] duration metric: took 4.66305ms for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.000317  569947 pod_ready.go:83] waiting for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.004843  569947 pod_ready.go:94] pod "etcd-no-preload-208281" is "Ready"
	I1219 03:10:58.004871  569947 pod_ready.go:86] duration metric: took 4.527165ms for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.006889  569947 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.010814  569947 pod_ready.go:94] pod "kube-apiserver-no-preload-208281" is "Ready"
	I1219 03:10:58.010843  569947 pod_ready.go:86] duration metric: took 3.912426ms for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.012893  569947 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.394606  569947 pod_ready.go:94] pod "kube-controller-manager-no-preload-208281" is "Ready"
	I1219 03:10:58.394643  569947 pod_ready.go:86] duration metric: took 381.720753ms for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.594310  569947 pod_ready.go:83] waiting for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.994002  569947 pod_ready.go:94] pod "kube-proxy-xst8w" is "Ready"
	I1219 03:10:58.994037  569947 pod_ready.go:86] duration metric: took 399.698104ms for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.194965  569947 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594191  569947 pod_ready.go:94] pod "kube-scheduler-no-preload-208281" is "Ready"
	I1219 03:10:59.594219  569947 pod_ready.go:86] duration metric: took 399.226469ms for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594230  569947 pod_ready.go:40] duration metric: took 1.605690954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:59.642421  569947 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:10:59.644674  569947 out.go:179] * Done! kubectl is now configured to use "no-preload-208281" cluster and "default" namespace by default
	I1219 03:10:55.937173  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.435825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.936702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.436527  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.436611  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.936591  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.436321  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.937837  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.436459  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.936639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.437141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.936951  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.436292  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.437702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.936237  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.436721  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.936104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.439639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.936149  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:06.433765  573699 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:11:06.433806  573699 kapi.go:107] duration metric: took 6m0.001182154s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:11:06.433932  573699 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:11:06.435864  573699 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:11:06.437280  573699 addons.go:546] duration metric: took 6m7.672932083s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:11:06.437331  573699 start.go:247] waiting for cluster config update ...
	I1219 03:11:06.437348  573699 start.go:256] writing updated cluster config ...
	I1219 03:11:06.437666  573699 ssh_runner.go:195] Run: rm -f paused
	I1219 03:11:06.441973  573699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:06.446110  573699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.450837  573699 pod_ready.go:94] pod "coredns-66bc5c9577-86vsf" is "Ready"
	I1219 03:11:06.450868  573699 pod_ready.go:86] duration metric: took 4.729554ms for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.453222  573699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.457430  573699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.457451  573699 pod_ready.go:86] duration metric: took 4.204892ms for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.459510  573699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.463733  573699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.463756  573699 pod_ready.go:86] duration metric: took 4.230488ms for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.465771  573699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.846433  573699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.846461  573699 pod_ready.go:86] duration metric: took 380.664307ms for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.046474  573699 pod_ready.go:83] waiting for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.446485  573699 pod_ready.go:94] pod "kube-proxy-lgw6f" is "Ready"
	I1219 03:11:07.446515  573699 pod_ready.go:86] duration metric: took 400.010893ms for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.647551  573699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046807  573699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:08.046840  573699 pod_ready.go:86] duration metric: took 399.227778ms for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046853  573699 pod_ready.go:40] duration metric: took 1.604833632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:08.095708  573699 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:11:08.097778  573699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103644" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                              NAMESPACE
	219023786529f       6e38f40d628db       8 minutes ago       Running             storage-provisioner        2                   2d1b5a57c414a       storage-provisioner                              kube-system
	71825ae44f527       a0607af4fcd8a       9 minutes ago       Running             kubernetes-dashboard-api   0                   8cb713fbafe4a       kubernetes-dashboard-api-6fc4d946b9-gd6qk        kubernetes-dashboard
	a7b7fd7bf394e       59f642f485d26       9 minutes ago       Running             kubernetes-dashboard-web   0                   9074df2cb8cd2       kubernetes-dashboard-web-858bd7466-n2wrh         kubernetes-dashboard
	bba7b1d6bc96c       4921d7a6dffa9       9 minutes ago       Running             kindnet-cni                1                   a3585342df01c       kindnet-2hplz                                    kube-system
	b2c2e646ecfba       56cc512116c8f       9 minutes ago       Running             busybox                    1                   04096ed4a349b       busybox                                          default
	863ed2b101014       ead0a4a53df89       9 minutes ago       Running             coredns                    1                   6980262009b2b       coredns-5dd5756b68-l88tx                         kube-system
	27b2e16e5c09e       6e38f40d628db       9 minutes ago       Exited              storage-provisioner        1                   2d1b5a57c414a       storage-provisioner                              kube-system
	cdf87bf1433e7       ea1030da44aa1       9 minutes ago       Running             kube-proxy                 1                   ccf5bdf34f0f4       kube-proxy-666m9                                 kube-system
	ec41efb71d11f       f6f496300a2ae       9 minutes ago       Running             kube-scheduler             1                   3a42d74f17260       kube-scheduler-old-k8s-version-002036            kube-system
	dfe38bb0dfc86       bb5e0dde9054c       9 minutes ago       Running             kube-apiserver             1                   c0672cbc96865       kube-apiserver-old-k8s-version-002036            kube-system
	9b508ac5bcc2f       4be79c38a4bab       9 minutes ago       Running             kube-controller-manager    1                   0ea551da5c7a6       kube-controller-manager-old-k8s-version-002036   kube-system
	b9960e975d031       73deb9a3f7025       9 minutes ago       Running             etcd                       1                   3855b9c551a39       etcd-old-k8s-version-002036                      kube-system
	0910d0e25fb93       56cc512116c8f       10 minutes ago      Exited              busybox                    0                   cbcafd254e45b       busybox                                          default
	f5a6828923844       ead0a4a53df89       10 minutes ago      Exited              coredns                    0                   47f60961f0df9       coredns-5dd5756b68-l88tx                         kube-system
	39c0ef083f8a4       4921d7a6dffa9       10 minutes ago      Exited              kindnet-cni                0                   f1900c4db6e63       kindnet-2hplz                                    kube-system
	8763b9d407817       ea1030da44aa1       10 minutes ago      Exited              kube-proxy                 0                   12bf56a539ab2       kube-proxy-666m9                                 kube-system
	c0f0972814035       73deb9a3f7025       10 minutes ago      Exited              etcd                       0                   8bf3f5e0cb410       etcd-old-k8s-version-002036                      kube-system
	dba3065c9833e       bb5e0dde9054c       10 minutes ago      Exited              kube-apiserver             0                   4996e55587c6d       kube-apiserver-old-k8s-version-002036            kube-system
	4c30550e453e3       f6f496300a2ae       10 minutes ago      Exited              kube-scheduler             0                   fb7c53cd7882d       kube-scheduler-old-k8s-version-002036            kube-system
	42172d3a4d4cb       4be79c38a4bab       10 minutes ago      Exited              kube-controller-manager    0                   3611bff22d3f1       kube-controller-manager-old-k8s-version-002036   kube-system
	
	
	==> containerd <==
	Dec 19 03:14:12 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:12.625440946Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb4bc745d243699a94e4bd20f4c0b1d.slice/cri-containerd-9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c.scope/hugetlb.1GB.events\""
	Dec 19 03:14:12 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:12.626320100Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de5ff3676b9c1f3bccdf4ad3d42f1e.slice/cri-containerd-dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751.scope/hugetlb.2MB.events\""
	Dec 19 03:14:12 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:12.626461977Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de5ff3676b9c1f3bccdf4ad3d42f1e.slice/cri-containerd-dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.641168155Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb4bc745d243699a94e4bd20f4c0b1d.slice/cri-containerd-9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.641285275Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb4bc745d243699a94e4bd20f4c0b1d.slice/cri-containerd-9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.642220801Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de5ff3676b9c1f3bccdf4ad3d42f1e.slice/cri-containerd-dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.642338581Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de5ff3676b9c1f3bccdf4ad3d42f1e.slice/cri-containerd-dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.643444701Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7074ad061d5bac3fab5e1d923113a3f.slice/cri-containerd-ec41efb71d11f10b0b94642489d3834fdc3d5928e6b0c2b8ffff7125bd7af0b5.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.643564004Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7074ad061d5bac3fab5e1d923113a3f.slice/cri-containerd-ec41efb71d11f10b0b94642489d3834fdc3d5928e6b0c2b8ffff7125bd7af0b5.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.644486406Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de888c7_fb93_4e9d_a535_31b7b29f921f.slice/cri-containerd-b2c2e646ecfbacbe13a3967da43bdd9ad22aa9e8731ab55e1c95e55fa24c45eb.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.644639024Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de888c7_fb93_4e9d_a535_31b7b29f921f.slice/cri-containerd-b2c2e646ecfbacbe13a3967da43bdd9ad22aa9e8731ab55e1c95e55fa24c45eb.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.645407564Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod39d24a02_b01d_42e2_91a9_afcbe4369262.slice/cri-containerd-bba7b1d6bc96c331ddb22fa76f1a84d6155438b0895d2c1747dc5fba25b38401.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.645488906Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod39d24a02_b01d_42e2_91a9_afcbe4369262.slice/cri-containerd-bba7b1d6bc96c331ddb22fa76f1a84d6155438b0895d2c1747dc5fba25b38401.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.646380557Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdeeabb6_d6bd_4c14_88ae_4a3b2cb95017.slice/cri-containerd-71825ae44f5277e1ab0659c4cf232265a66e3271a0ea4220f8f56d30ed22a8b1.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.646506096Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdeeabb6_d6bd_4c14_88ae_4a3b2cb95017.slice/cri-containerd-71825ae44f5277e1ab0659c4cf232265a66e3271a0ea4220f8f56d30ed22a8b1.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.647422663Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b372da_a545_4eb0_a787_a765babe3092.slice/cri-containerd-219023786529f0d2b2e8db1c37d04dd25946c1f17c1199c8669d4d942666f005.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.647548141Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b372da_a545_4eb0_a787_a765babe3092.slice/cri-containerd-219023786529f0d2b2e8db1c37d04dd25946c1f17c1199c8669d4d942666f005.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.648415220Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261654f50ea014ec6080d0b3394c8bcf.slice/cri-containerd-b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.648525789Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261654f50ea014ec6080d0b3394c8bcf.slice/cri-containerd-b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.649276047Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51ba474_bbbd_4293_b5ef_480a0266f436.slice/cri-containerd-863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.649382445Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51ba474_bbbd_4293_b5ef_480a0266f436.slice/cri-containerd-863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.650049829Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b59ee1_673b_4dbe_bc2c_d2ff2e3a620c.slice/cri-containerd-cdf87bf1433e7c2e0dae2c3a75335eb849fc8e2aa686dccdc9a6dbcf45ed6f7b.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.650153812Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b59ee1_673b_4dbe_bc2c_d2ff2e3a620c.slice/cri-containerd-cdf87bf1433e7c2e0dae2c3a75335eb849fc8e2aa686dccdc9a6dbcf45ed6f7b.scope/hugetlb.1GB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.650962269Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd007d2_8341_49d4_8b6d_8f799c794abf.slice/cri-containerd-a7b7fd7bf394e74ab791d76919b0a3eeaa8297034b785789903fd48bb69b157a.scope/hugetlb.2MB.events\""
	Dec 19 03:14:22 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:14:22.651050967Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd007d2_8341_49d4_8b6d_8f799c794abf.slice/cri-containerd-a7b7fd7bf394e74ab791d76919b0a3eeaa8297034b785789903fd48bb69b157a.scope/hugetlb.1GB.events\""
	
	
	==> coredns [863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41296 - 26157 "HINFO IN 1571955820553979720.298796393754182401. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.051572035s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f5a6828923844354aa75ffc6f9543fa3041f3bf3b66c134daad8384cab76bf5e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40471 - 45478 "HINFO IN 4731433819679745007.4400201330808537038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028889842s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-002036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-002036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-002036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_03_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-002036
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:14:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:10:21 +0000   Fri, 19 Dec 2025 03:03:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:10:21 +0000   Fri, 19 Dec 2025 03:03:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:10:21 +0000   Fri, 19 Dec 2025 03:03:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:10:21 +0000   Fri, 19 Dec 2025 03:04:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-002036
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                09e77726-e089-467e-8671-1211ba943cda
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-l88tx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-002036                              100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-2hplz                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-002036                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-002036           200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-666m9                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-002036                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-jjqwh                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-api-6fc4d946b9-gd6qk                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m27s
	  kubernetes-dashboard        kubernetes-dashboard-auth-745d5d46bb-rkfcv               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m27s
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-9xprh                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m27s
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-n2wrh                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m38s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)      kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node old-k8s-version-002036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-002036 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-002036 event: Registered Node old-k8s-version-002036 in Controller
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-002036 status is now: NodeReady
	  Normal  Starting                 9m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m43s (x9 over 9m44s)  kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m43s (x7 over 9m44s)  kubelet          Node old-k8s-version-002036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m43s (x7 over 9m44s)  kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m27s                  node-controller  Node old-k8s-version-002036 event: Registered Node old-k8s-version-002036 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787] <==
	{"level":"info","ts":"2025-12-19T03:04:41.730296Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T03:04:41.730507Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-19T03:04:41.730542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-19T03:04:41.730817Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-12-19T03:04:41.731096Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-19T03:04:41.731201Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-19T03:04:41.733296Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-19T03:04:41.733393Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-19T03:04:41.733428Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-19T03:04:41.733572Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T03:04:41.73396Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:04:43.62157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:43.621631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:43.621673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:43.621686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.621696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.621704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.621714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.623118Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-002036 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:04:43.623141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:43.623183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:43.62338Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:43.623407Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:43.625639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:04:43.625642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> etcd [c0f0972814035ffda82727b1fcf47abe8c12064548f484c2c6c7ece1325d5770] <==
	{"level":"info","ts":"2025-12-19T03:03:36.230875Z","caller":"traceutil/trace.go:171","msg":"trace[1916280320] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"113.321236ms","start":"2025-12-19T03:03:36.117546Z","end":"2025-12-19T03:03:36.230867Z","steps":["trace[1916280320] 'process raft request'  (duration: 112.606237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.230943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.934128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-19T03:03:36.23097Z","caller":"traceutil/trace.go:171","msg":"trace[472830767] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:28; }","duration":"112.960962ms","start":"2025-12-19T03:03:36.117999Z","end":"2025-12-19T03:03:36.23096Z","steps":["trace[472830767] 'agreement among raft nodes before linearized reading'  (duration: 112.918738ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399146Z","caller":"traceutil/trace.go:171","msg":"trace[1612208426] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"162.707672ms","start":"2025-12-19T03:03:36.236391Z","end":"2025-12-19T03:03:36.399099Z","steps":["trace[1612208426] 'process raft request'  (duration: 112.442821ms)","trace[1612208426] 'compare'  (duration: 50.023455ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:36.399196Z","caller":"traceutil/trace.go:171","msg":"trace[58388472] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"161.685609ms","start":"2025-12-19T03:03:36.237504Z","end":"2025-12-19T03:03:36.399189Z","steps":["trace[58388472] 'process raft request'  (duration: 161.593823ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399193Z","caller":"traceutil/trace.go:171","msg":"trace[1619775753] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"161.656388ms","start":"2025-12-19T03:03:36.237507Z","end":"2025-12-19T03:03:36.399164Z","steps":["trace[1619775753] 'process raft request'  (duration: 161.573032ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399151Z","caller":"traceutil/trace.go:171","msg":"trace[912657803] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"160.974586ms","start":"2025-12-19T03:03:36.238163Z","end":"2025-12-19T03:03:36.399137Z","steps":["trace[912657803] 'process raft request'  (duration: 160.952651ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399232Z","caller":"traceutil/trace.go:171","msg":"trace[493132379] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"162.045713ms","start":"2025-12-19T03:03:36.237166Z","end":"2025-12-19T03:03:36.399212Z","steps":["trace[493132379] 'process raft request'  (duration: 161.814271ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399237Z","caller":"traceutil/trace.go:171","msg":"trace[402602698] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"161.746686ms","start":"2025-12-19T03:03:36.237451Z","end":"2025-12-19T03:03:36.399197Z","steps":["trace[402602698] 'process raft request'  (duration: 161.592106ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399302Z","caller":"traceutil/trace.go:171","msg":"trace[1236662667] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"161.820279ms","start":"2025-12-19T03:03:36.23746Z","end":"2025-12-19T03:03:36.399281Z","steps":["trace[1236662667] 'process raft request'  (duration: 161.60246ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399301Z","caller":"traceutil/trace.go:171","msg":"trace[91038073] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"161.879529ms","start":"2025-12-19T03:03:36.237412Z","end":"2025-12-19T03:03:36.399291Z","steps":["trace[91038073] 'process raft request'  (duration: 161.606202ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.880457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.765633ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790777148805294 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/probes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/probes\" value_size:596 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-19T03:03:36.880718Z","caller":"traceutil/trace.go:171","msg":"trace[514249475] linearizableReadLoop","detail":"{readStateIndex:50; appliedIndex:48; }","duration":"296.169834ms","start":"2025-12-19T03:03:36.584536Z","end":"2025-12-19T03:03:36.880705Z","steps":["trace[514249475] 'read index received'  (duration: 76.710993ms)","trace[514249475] 'applied index is now lower than readState.Index'  (duration: 219.457993ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:36.880717Z","caller":"traceutil/trace.go:171","msg":"trace[2140422772] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"428.202523ms","start":"2025-12-19T03:03:36.452492Z","end":"2025-12-19T03:03:36.880694Z","steps":["trace[2140422772] 'process raft request'  (duration: 208.783793ms)","trace[2140422772] 'compare'  (duration: 218.644109ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:36.880764Z","caller":"traceutil/trace.go:171","msg":"trace[2146429374] transaction","detail":"{read_only:false; response_revision:46; number_of_response:1; }","duration":"423.797756ms","start":"2025-12-19T03:03:36.45696Z","end":"2025-12-19T03:03:36.880758Z","steps":["trace[2146429374] 'process raft request'  (duration: 423.60052ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.880803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:03:36.452473Z","time spent":"428.296663ms","remote":"127.0.0.1:55772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":632,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/probes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/probes\" value_size:596 >> failure:<>"}
	{"level":"warn","ts":"2025-12-19T03:03:36.880906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.358394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-19T03:03:36.880958Z","caller":"traceutil/trace.go:171","msg":"trace[1343287706] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:46; }","duration":"296.429709ms","start":"2025-12-19T03:03:36.584515Z","end":"2025-12-19T03:03:36.880945Z","steps":["trace[1343287706] 'agreement among raft nodes before linearized reading'  (duration: 296.28034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.881032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:03:36.456934Z","time spent":"423.856067ms","remote":"127.0.0.1:55772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1120,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/system-node-high\" mod_revision:43 > success:<request_put:<key:\"/registry/flowschemas/system-node-high\" value_size:1074 >> failure:<request_range:<key:\"/registry/flowschemas/system-node-high\" > >"}
	{"level":"warn","ts":"2025-12-19T03:03:37.230147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.376525ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790777148805359 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:heapster\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:heapster\" value_size:579 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-19T03:03:37.230243Z","caller":"traceutil/trace.go:171","msg":"trace[1162357764] linearizableReadLoop","detail":"{readStateIndex:88; appliedIndex:87; }","duration":"135.883415ms","start":"2025-12-19T03:03:37.094347Z","end":"2025-12-19T03:03:37.23023Z","steps":["trace[1162357764] 'read index received'  (duration: 28.791µs)","trace[1162357764] 'applied index is now lower than readState.Index'  (duration: 135.853656ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:37.230333Z","caller":"traceutil/trace.go:171","msg":"trace[1991665803] transaction","detail":"{read_only:false; response_revision:84; number_of_response:1; }","duration":"209.39897ms","start":"2025-12-19T03:03:37.020904Z","end":"2025-12-19T03:03:37.230303Z","steps":["trace[1991665803] 'process raft request'  (duration: 64.803363ms)","trace[1991665803] 'compare'  (duration: 144.248743ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:03:37.230375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.063964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-19T03:03:37.230403Z","caller":"traceutil/trace.go:171","msg":"trace[916164666] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:84; }","duration":"136.098902ms","start":"2025-12-19T03:03:37.094295Z","end":"2025-12-19T03:03:37.230394Z","steps":["trace[916164666] 'agreement among raft nodes before linearized reading'  (duration: 135.974814ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:37.420394Z","caller":"traceutil/trace.go:171","msg":"trace[973871162] transaction","detail":"{read_only:false; response_revision:86; number_of_response:1; }","duration":"121.376207ms","start":"2025-12-19T03:03:37.298996Z","end":"2025-12-19T03:03:37.420372Z","steps":["trace[973871162] 'process raft request'  (duration: 49.341965ms)","trace[973871162] 'compare'  (duration: 71.905634ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:14:24 up  1:56,  0 user,  load average: 0.35, 0.97, 5.35
	Linux old-k8s-version-002036 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39c0ef083f8a43337d9a7d982f0384e5b9fb7dfc8d1288356f134d0cbd5f67dc] <==
	I1219 03:03:57.159084       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:03:57.159387       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1219 03:03:57.159533       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:03:57.159559       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:03:57.254823       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:03:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:03:57.459937       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:03:57.459962       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:03:57.459975       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:03:57.555262       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:03:57.760161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:03:57.760224       1 metrics.go:72] Registering metrics
	I1219 03:03:57.760279       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:07.462743       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:04:07.462833       1 main.go:301] handling current node
	I1219 03:04:17.460692       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:04:17.460739       1 main.go:301] handling current node
	
	
	==> kindnet [bba7b1d6bc96c331ddb22fa76f1a84d6155438b0895d2c1747dc5fba25b38401] <==
	I1219 03:12:16.414684       1 main.go:301] handling current node
	I1219 03:12:26.418706       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:12:26.418739       1 main.go:301] handling current node
	I1219 03:12:36.419304       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:12:36.419341       1 main.go:301] handling current node
	I1219 03:12:46.412749       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:12:46.412812       1 main.go:301] handling current node
	I1219 03:12:56.412676       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:12:56.412728       1 main.go:301] handling current node
	I1219 03:13:06.419708       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:06.419753       1 main.go:301] handling current node
	I1219 03:13:16.411887       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:16.411937       1 main.go:301] handling current node
	I1219 03:13:26.418659       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:26.418700       1 main.go:301] handling current node
	I1219 03:13:36.419456       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:36.419501       1 main.go:301] handling current node
	I1219 03:13:46.413801       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:46.413855       1 main.go:301] handling current node
	I1219 03:13:56.414339       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:56.414387       1 main.go:301] handling current node
	I1219 03:14:06.420180       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:06.420220       1 main.go:301] handling current node
	I1219 03:14:16.412788       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:16.412828       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dba3065c9833e9f4afe46526bd689bf64734d83bf5f365ffb830dec4dcecc528] <==
	I1219 03:03:52.565246       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:03:52.565297       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:03:52.616476       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	W1219 03:04:22.106806       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.106892       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:04:22.107256       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:04:22.107280       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:22.112388       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.112459       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1219 03:04:22.112516       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:22.112547       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:04:22.112559       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:04:22.193411       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.105.1.59"}
	W1219 03:04:22.204323       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.204399       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:04:22.204826       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:04:22.204852       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:22.209834       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.209905       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:04:22.210310       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:04:22.210330       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-apiserver [dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751] <==
	E1219 03:09:45.658826       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:09:45.659951       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:10:44.599142       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:10:44.599167       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:10:45.659165       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:10:45.659206       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:10:45.659212       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:10:45.660347       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:10:45.660433       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:10:45.660451       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:11:44.599120       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:11:44.599142       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:12:44.598989       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:12:44.599024       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:12:45.660186       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:12:45.660230       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:12:45.660239       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:12:45.661299       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:12:45.661376       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:12:45.661387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:13:44.599125       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:13:44.599145       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [42172d3a4d4cb7aacee226a539d40edb403b90518feb569cc0854014a8a2daf3] <==
	I1219 03:03:52.620521       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1219 03:03:52.821801       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-szgdv"
	I1219 03:03:52.830654       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l88tx"
	I1219 03:03:52.841643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="221.202983ms"
	I1219 03:03:52.859450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.736845ms"
	I1219 03:03:52.860040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.174µs"
	I1219 03:03:53.591494       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1219 03:03:53.602218       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-szgdv"
	I1219 03:03:53.611109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.006711ms"
	I1219 03:03:53.627903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.69768ms"
	I1219 03:03:53.628040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.405µs"
	I1219 03:03:53.628172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.457µs"
	I1219 03:04:07.521462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.47µs"
	I1219 03:04:07.542469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.558µs"
	I1219 03:04:08.849563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.629µs"
	I1219 03:04:08.879779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.360678ms"
	I1219 03:04:08.879910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.84µs"
	I1219 03:04:11.777875       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1219 03:04:22.126978       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1219 03:04:22.134658       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-jjqwh"
	I1219 03:04:22.140990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="14.273739ms"
	I1219 03:04:22.151649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="10.595163ms"
	I1219 03:04:22.151800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="77.832µs"
	I1219 03:04:22.156676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="110.922µs"
	I1219 03:04:22.362818       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c] <==
	I1219 03:09:57.736713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:10:27.388794       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:10:27.745499       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:10:57.394744       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:10:57.753819       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:11:10.011142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="146.171µs"
	I1219 03:11:19.012661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="186.895µs"
	I1219 03:11:24.010950       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb" duration="178.894µs"
	I1219 03:11:25.009794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="112.545µs"
	E1219 03:11:27.399940       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:11:27.762176       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:11:34.010851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="147.957µs"
	I1219 03:11:36.014103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="178.624µs"
	I1219 03:11:37.010641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb" duration="129.135µs"
	I1219 03:11:51.014262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="124.675µs"
	E1219 03:11:57.405901       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:11:57.770183       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:12:27.411290       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:12:27.777677       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:12:57.416718       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:12:57.785447       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:13:27.421547       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:13:27.792867       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:13:57.426737       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:13:57.800908       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8763b9d407817a6fdeefb4eca9030abd4e87157819227695d43c3f1e30e5db56] <==
	I1219 03:03:53.465320       1 server_others.go:69] "Using iptables proxy"
	I1219 03:03:53.480477       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1219 03:03:53.531303       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:03:53.536677       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:03:53.536723       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:03:53.536731       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:03:53.536771       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:03:53.537131       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:03:53.537460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:03:53.538383       1 config.go:188] "Starting service config controller"
	I1219 03:03:53.538469       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:03:53.538387       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:03:53.538626       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:03:53.539146       1 config.go:315] "Starting node config controller"
	I1219 03:03:53.539160       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:03:53.639550       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:03:53.639522       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:03:53.639640       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [cdf87bf1433e7c2e0dae2c3a75335eb849fc8e2aa686dccdc9a6dbcf45ed6f7b] <==
	I1219 03:04:45.727676       1 server_others.go:69] "Using iptables proxy"
	I1219 03:04:45.738908       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1219 03:04:45.786634       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:45.793346       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:04:45.793394       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:04:45.793400       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:04:45.793431       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:04:45.793752       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:04:45.793825       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:45.794528       1 config.go:315] "Starting node config controller"
	I1219 03:04:45.794616       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:04:45.794895       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:04:45.794998       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:04:45.794953       1 config.go:188] "Starting service config controller"
	I1219 03:04:45.795052       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:04:45.895061       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:04:45.895083       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:04:45.895109       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [4c30550e453e3bd638fc05cb1fab0a37a8d5dca882572ea1f9f7632acfd2724f] <==
	W1219 03:03:36.966145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1219 03:03:36.966321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1219 03:03:36.973470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1219 03:03:36.973531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1219 03:03:37.108846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.108884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:37.148690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:03:37.148743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:03:37.180898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1219 03:03:37.180946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1219 03:03:37.227624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.227665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:37.250310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1219 03:03:37.250342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1219 03:03:37.263181       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1219 03:03:37.263215       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:03:37.274139       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.274174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:37.281849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:03:37.281886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:03:37.368720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1219 03:03:37.368754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1219 03:03:37.574481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.574555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1219 03:03:40.469522       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec41efb71d11f10b0b94642489d3834fdc3d5928e6b0c2b8ffff7125bd7af0b5] <==
	I1219 03:04:42.529910       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:04:44.641558       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:04:44.641637       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:44.641657       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:04:44.641667       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:04:44.669727       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:04:44.669792       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:44.672755       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:44.672806       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:04:44.673809       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:04:44.673982       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:04:44.774012       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:13:08 old-k8s-version-002036 kubelet[593]: E1219 03:13:08.999112     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:13:08 old-k8s-version-002036 kubelet[593]: E1219 03:13:08.999121     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:13:13 old-k8s-version-002036 kubelet[593]: E1219 03:13:13.000221     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:13:14 old-k8s-version-002036 kubelet[593]: E1219 03:13:14.999796     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:13:21 old-k8s-version-002036 kubelet[593]: E1219 03:13:21.000036     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:13:21 old-k8s-version-002036 kubelet[593]: E1219 03:13:21.999176     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:13:26 old-k8s-version-002036 kubelet[593]: E1219 03:13:26.000017     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:13:29 old-k8s-version-002036 kubelet[593]: E1219 03:13:29.000044     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:13:35 old-k8s-version-002036 kubelet[593]: E1219 03:13:35.999468     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:13:35 old-k8s-version-002036 kubelet[593]: E1219 03:13:35.999540     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:13:39 old-k8s-version-002036 kubelet[593]: E1219 03:13:39.999784     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:13:43 old-k8s-version-002036 kubelet[593]: E1219 03:13:43.999550     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:13:46 old-k8s-version-002036 kubelet[593]: E1219 03:13:46.999170     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:13:50 old-k8s-version-002036 kubelet[593]: E1219 03:13:50.000080     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:13:54 old-k8s-version-002036 kubelet[593]: E1219 03:13:54.999613     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:13:56 old-k8s-version-002036 kubelet[593]: E1219 03:13:56.999568     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:13:58 old-k8s-version-002036 kubelet[593]: E1219 03:13:58.999942     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:14:02 old-k8s-version-002036 kubelet[593]: E1219 03:14:02.999738     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:14:06 old-k8s-version-002036 kubelet[593]: E1219 03:14:06.002109     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:14:09 old-k8s-version-002036 kubelet[593]: E1219 03:14:09.999510     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:14:11 old-k8s-version-002036 kubelet[593]: E1219 03:14:11.000336     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:14:14 old-k8s-version-002036 kubelet[593]: E1219 03:14:13.999966     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:14:18 old-k8s-version-002036 kubelet[593]: E1219 03:14:18.999411     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:14:23 old-k8s-version-002036 kubelet[593]: E1219 03:14:23.000178     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:14:25 old-k8s-version-002036 kubelet[593]: E1219 03:14:25.000275     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	
	
	==> kubernetes-dashboard [71825ae44f5277e1ab0659c4cf232265a66e3271a0ea4220f8f56d30ed22a8b1] <==
	I1219 03:05:06.987920       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:06.987928       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:06.987935       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:06.998468       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:05:06.998510       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:07.078280       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:07.079785       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:05:37.083096       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:06:07.087022       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:06:37.089790       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:07:07.093553       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:07:37.096748       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:08:07.099897       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:08:37.103509       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:09:07.107028       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:09:37.110567       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:10:07.113923       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:10:37.117231       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:11:07.121051       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:11:37.124456       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:12:07.127703       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:12:37.130492       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:13:07.134259       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:13:37.138135       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:14:07.142148       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> kubernetes-dashboard [a7b7fd7bf394e74ab791d76919b0a3eeaa8297034b785789903fd48bb69b157a] <==
	I1219 03:05:03.763352       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:03.763415       1 init.go:48] Using in-cluster config
	I1219 03:05:03.763693       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [219023786529f0d2b2e8db1c37d04dd25946c1f17c1199c8669d4d942666f005] <==
	I1219 03:05:29.083053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:29.093235       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:29.093289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:05:46.492337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:46.492464       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a8ba3d9-8dbe-480f-9c35-c4b324977dc6", APIVersion:"v1", ResourceVersion:"825", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-002036_45c5b5d0-fab5-41e7-a9c3-e0b08402b1a5 became leader
	I1219 03:05:46.492521       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-002036_45c5b5d0-fab5-41e7-a9c3-e0b08402b1a5!
	I1219 03:05:46.593295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-002036_45c5b5d0-fab5-41e7-a9c3-e0b08402b1a5!
	
	
	==> storage-provisioner [27b2e16e5c09e9cff4cce562e7b84a5be956640cf474813346451004c553041c] <==
	I1219 03:04:45.663799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:15.666834       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-002036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-002036 describe pod metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-002036 describe pod metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct: exit status 1 (70.315662ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-jjqwh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-745d5d46bb-rkfcv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-f487b85cd-9xprh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-002036 describe pod metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.33s)
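The recurring ImagePullBackOff entries in the kubelet log above can be summarized mechanically when triaging which images are failing to pull. A minimal sketch (assuming Python is available on the host; the two sample lines are trimmed copies of the kubelet output above, and the regex targets the kubelet's escaped-quote message format):

```python
import re
from collections import Counter

# Two kubelet lines trimmed from the log dump above.
LOG = r'''
E1219 03:13:08 pod_workers.go:1300 "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh"
E1219 03:13:13 pod_workers.go:1300 "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh"
'''

# The image name sits between doubly escaped quotes (\\\") in the kubelet
# message; match the run of backslashes, then capture up to the next
# backslash or quote.
IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^\\"]+)')

# Count backoff occurrences per image reference.
counts = Counter(IMAGE_RE.findall(LOG))
for image, n in counts.items():
    print(image, n)
```

Run against the full 25-line dump, this makes it obvious at a glance that every dashboard pod is stuck on image pulls rather than crashing after start.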

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.05s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:19:55.644970198 +0000 UTC m=+3272.638094402
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-536489
helpers_test.go:244: (dbg) docker inspect embed-certs-536489:

-- stdout --
	[
	    {
	        "Id": "0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31",
	        "Created": "2025-12-19T03:03:37.532560338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:40.119931988Z",
	            "FinishedAt": "2025-12-19T03:04:39.135554624Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/hosts",
	        "LogPath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31-json.log",
	        "Name": "/embed-certs-536489",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-536489:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-536489",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31",
	                "LowerDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-536489",
	                "Source": "/var/lib/docker/volumes/embed-certs-536489/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-536489",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-536489",
	                "name.minikube.sigs.k8s.io": "embed-certs-536489",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "62dfef23b82a9174bb43617105479520225e65d2827cb4760f18a3c40bd5051d",
	            "SandboxKey": "/var/run/docker/netns/62dfef23b82a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-536489": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9183f6f1b01c9bf1232449e4edccbafc7ca8c7340f355e79ef181320c71bc1bf",
	                    "EndpointID": "cefbdced7c77040bacd3818765fca7955502ceb83fa96908456634fab1d699c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ee:9d:cc:4a:af:61",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-536489",
	                        "0a12a246db9e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
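In post-mortems like the docker inspect dump above, the host-port bindings under NetworkSettings.Ports are usually what matters for reaching the node (SSH on 22/tcp, the API server on 8443/tcp). A small sketch that flattens them; the JSON fragment is hand-trimmed from the inspect output above:

```python
import json

# Minimal fragment of the `docker inspect embed-certs-536489` output above.
INSPECT = json.loads('''
[
  {
    "Name": "/embed-certs-536489",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33088"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33091"}]
      }
    }
  }
]
''')

# Flatten container-port -> host-port, skipping unbound ports.
ports = {
    cport: binds[0]["HostPort"]
    for cport, binds in INSPECT[0]["NetworkSettings"]["Ports"].items()
    if binds
}
print(ports)  # {'22/tcp': '33088', '8443/tcp': '33091'}
```

The same shape is what `docker inspect --format` or jq would walk; here 8443/tcp maps to 127.0.0.1:33091, which is where the minikube client talks to this node's API server.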
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-536489 -n embed-certs-536489
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-536489 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-536489 logs -n 25: (1.554747409s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-967008                                                                                                                                                                                                                              │ cert-options-967008          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p kubernetes-upgrade-340572                                                                                                                                                                                                                        │ kubernetes-upgrade-340572    │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-821572 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-821572                                                                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ delete  │ -p disable-driver-mounts-443690                                                                                                                                                                                                                     │ disable-driver-mounts-443690 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-002036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p old-k8s-version-002036 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p embed-certs-536489 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:50.472071  573699 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:50.472443  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.472454  573699 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:50.472463  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.473301  573699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:04:50.474126  573699 out.go:368] Setting JSON to false
	I1219 03:04:50.476304  573699 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6429,"bootTime":1766107061,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:50.476440  573699 start.go:143] virtualization: kvm guest
	I1219 03:04:50.478144  573699 out.go:179] * [default-k8s-diff-port-103644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:50.479945  573699 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:50.480003  573699 notify.go:221] Checking for updates...
	I1219 03:04:50.482332  573699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:50.483901  573699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.485635  573699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:04:50.489602  573699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:50.493460  573699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:48.691145  569947 cli_runner.go:164] Run: docker network inspect no-preload-208281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:48.711282  569947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:48.716221  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.729144  569947 kubeadm.go:884] updating cluster {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:48.729324  569947 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:04:48.729375  569947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:48.763109  569947 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:48.763136  569947 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:48.763146  569947 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:04:48.763264  569947 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:48.763347  569947 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:48.796269  569947 cni.go:84] Creating CNI manager for ""
	I1219 03:04:48.796300  569947 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:48.796329  569947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:48.796369  569947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208281 NodeName:no-preload-208281 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:48.796558  569947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:48.796669  569947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:04:48.808026  569947 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:48.808102  569947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:48.819240  569947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1219 03:04:48.836384  569947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:04:48.852550  569947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1219 03:04:48.869275  569947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:48.873704  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.886490  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:48.994443  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:49.020494  569947 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281 for IP: 192.168.85.2
	I1219 03:04:49.020518  569947 certs.go:195] generating shared ca certs ...
	I1219 03:04:49.020533  569947 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:49.020722  569947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:49.020809  569947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:49.020826  569947 certs.go:257] generating profile certs ...
	I1219 03:04:49.020975  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.key
	I1219 03:04:49.021064  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key.8f504093
	I1219 03:04:49.021159  569947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key
	I1219 03:04:49.021324  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:49.021373  569947 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:49.021389  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:49.021430  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:49.021457  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:49.021480  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:49.021525  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:49.022292  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:49.050958  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:49.072475  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:49.095867  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:49.124289  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:04:49.150664  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:04:49.188239  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:49.216791  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:49.242767  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:49.264732  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:49.286635  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:49.313716  569947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:49.329405  569947 ssh_runner.go:195] Run: openssl version
	I1219 03:04:49.337082  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.347002  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:49.355979  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.360975  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.361048  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.457547  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:49.470846  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.484764  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:49.501564  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510435  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510523  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.583657  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:49.596341  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.615267  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:49.637741  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651506  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651606  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.719393  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:49.738446  569947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:49.759885  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:49.839963  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:49.916940  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:49.984478  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:50.052790  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:50.213057  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:50.323267  569947 kubeadm.go:401] StartCluster: {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.323602  569947 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:50.323919  569947 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:50.475134  569947 cri.go:92] found id: "cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa"
	I1219 03:04:50.475159  569947 cri.go:92] found id: "fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569"
	I1219 03:04:50.475166  569947 cri.go:92] found id: "e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a"
	I1219 03:04:50.475171  569947 cri.go:92] found id: "496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c"
	I1219 03:04:50.475175  569947 cri.go:92] found id: "0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7"
	I1219 03:04:50.475180  569947 cri.go:92] found id: "1b139b90f72cc73cf0a391fb1b6dde88df245b3d92b6a686104996e14c38330c"
	I1219 03:04:50.475184  569947 cri.go:92] found id: "6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5"
	I1219 03:04:50.475506  569947 cri.go:92] found id: "6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e"
	I1219 03:04:50.475532  569947 cri.go:92] found id: "0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9"
	I1219 03:04:50.475549  569947 cri.go:92] found id: "7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459"
	I1219 03:04:50.475553  569947 cri.go:92] found id: "06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b"
	I1219 03:04:50.475557  569947 cri.go:92] found id: "ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400"
	I1219 03:04:50.475562  569947 cri.go:92] found id: ""
	I1219 03:04:50.475632  569947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:50.558499  569947 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","pid":805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e/rootfs","created":"2025-12-19T03:04:49.720787385Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-208281_355754afcd0ce2d7bab6c853c60e836c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2/rootfs","created":"2025-12-19T03:04:49.778097457Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-208281_e43ae2e7891eaa1ff806e636f311fb81","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","pid":838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07/rootfs","created":"2025-12-19T03:04:49.777265025Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-208281_80442131b1359e6657f2959b40f80467","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"},{"ociVersion":"1.2.1","id":"496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","pid":902,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c/rootfs","created":"2025-12-19T03:04:49.944110218Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3/rootfs","created":"2025-12-19T03:04:49.76636358Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208281_93a9992ff7a9c41e489b493737b5b488","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa/rootfs","created":"2025-12-19T03:04:50.065275653Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a/rootfs","created":"2025-12-19T03:04:50.024946214Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569/rootfs","created":"2025-12-19T03:04:50.153274168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"}]
	I1219 03:04:50.559253  569947 cri.go:129] list returned 8 containers
	I1219 03:04:50.559288  569947 cri.go:132] container: {ID:2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e Status:running}
	I1219 03:04:50.559310  569947 cri.go:134] skipping 2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e - not in ps
	I1219 03:04:50.559318  569947 cri.go:132] container: {ID:38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 Status:running}
	I1219 03:04:50.559326  569947 cri.go:134] skipping 38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 - not in ps
	I1219 03:04:50.559332  569947 cri.go:132] container: {ID:46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 Status:running}
	I1219 03:04:50.559338  569947 cri.go:134] skipping 46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 - not in ps
	I1219 03:04:50.559343  569947 cri.go:132] container: {ID:496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c Status:running}
	I1219 03:04:50.559363  569947 cri.go:138] skipping {496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c running}: state = "running", want "paused"
	I1219 03:04:50.559373  569947 cri.go:132] container: {ID:7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 Status:running}
	I1219 03:04:50.559381  569947 cri.go:134] skipping 7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 - not in ps
	I1219 03:04:50.559386  569947 cri.go:132] container: {ID:cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa Status:running}
	I1219 03:04:50.559393  569947 cri.go:138] skipping {cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa running}: state = "running", want "paused"
	I1219 03:04:50.559400  569947 cri.go:132] container: {ID:e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a Status:running}
	I1219 03:04:50.559406  569947 cri.go:138] skipping {e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a running}: state = "running", want "paused"
	I1219 03:04:50.559412  569947 cri.go:132] container: {ID:fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 Status:running}
	I1219 03:04:50.559419  569947 cri.go:138] skipping {fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 running}: state = "running", want "paused"
	I1219 03:04:50.559472  569947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:50.576564  569947 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:50.576683  569947 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:50.576777  569947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:50.600225  569947 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:50.601759  569947 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208281" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.605721  569947 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208281" cluster setting kubeconfig missing "no-preload-208281" context setting]
	I1219 03:04:50.610686  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.613392  569947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:50.647032  569947 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1219 03:04:50.647196  569947 kubeadm.go:602] duration metric: took 70.481994ms to restartPrimaryControlPlane
	I1219 03:04:50.647478  569947 kubeadm.go:403] duration metric: took 324.224528ms to StartCluster
	I1219 03:04:50.647573  569947 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.647991  569947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.652215  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.652837  569947 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:50.652966  569947 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208281"
	I1219 03:04:50.652984  569947 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208281"
	W1219 03:04:50.652993  569947 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:50.653027  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.653048  569947 config.go:182] Loaded profile config "no-preload-208281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:50.653120  569947 addons.go:70] Setting default-storageclass=true in profile "no-preload-208281"
	I1219 03:04:50.653135  569947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208281"
	I1219 03:04:50.653460  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.653534  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.655588  569947 addons.go:70] Setting metrics-server=true in profile "no-preload-208281"
	I1219 03:04:50.655611  569947 addons.go:239] Setting addon metrics-server=true in "no-preload-208281"
	W1219 03:04:50.655621  569947 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:50.655656  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.656118  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.656525  569947 addons.go:70] Setting dashboard=true in profile "no-preload-208281"
	I1219 03:04:50.656563  569947 addons.go:239] Setting addon dashboard=true in "no-preload-208281"
	W1219 03:04:50.656574  569947 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:50.656622  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.657316  569947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:50.657617  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.660722  569947 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:50.661854  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:50.707508  569947 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:50.708775  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:50.708812  569947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:50.708834  569947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:50.495202  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:50.495941  573699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:50.539840  573699 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:50.540119  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.710990  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.671412726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.711217  573699 docker.go:319] overlay module found
	I1219 03:04:50.713697  573699 out.go:179] * Using the docker driver based on existing profile
	I1219 03:04:50.714949  573699 start.go:309] selected driver: docker
	I1219 03:04:50.714970  573699 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.715089  573699 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:50.716020  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.884011  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.859280212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.884478  573699 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:50.884531  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:50.884789  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:50.884940  573699 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.887403  573699 out.go:179] * Starting "default-k8s-diff-port-103644" primary control-plane node in "default-k8s-diff-port-103644" cluster
	I1219 03:04:50.888689  573699 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:04:50.889896  573699 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:50.891030  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:50.891092  573699 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:04:50.891106  573699 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:50.891194  573699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:50.891211  573699 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:50.891221  573699 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:04:50.891356  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:50.932991  573699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:50.933024  573699 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:50.933040  573699 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:50.933079  573699 start.go:360] acquireMachinesLock for default-k8s-diff-port-103644: {Name:mk39933c40de3c92aeeb6b9d20d3c90e6af0f1fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:50.933158  573699 start.go:364] duration metric: took 48.804µs to acquireMachinesLock for "default-k8s-diff-port-103644"
	I1219 03:04:50.933177  573699 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:04:50.933183  573699 fix.go:54] fixHost starting: 
	I1219 03:04:50.933489  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:50.973427  573699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103644: state=Stopped err=<nil>
	W1219 03:04:50.973619  573699 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:50.748260  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.195228143s)
	I1219 03:04:50.748361  566718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:51.828106  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml: (1.079706419s)
	I1219 03:04:51.828277  566718 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-002036"
	I1219 03:04:51.828773  566718 cli_runner.go:164] Run: docker container inspect old-k8s-version-002036 --format={{.State.Status}}
	I1219 03:04:51.856291  566718 out.go:179] * Verifying dashboard addon...
	I1219 03:04:50.708886  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.709108  569947 addons.go:239] Setting addon default-storageclass=true in "no-preload-208281"
	W1219 03:04:50.709132  569947 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:50.709161  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.709725  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.710101  569947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.710123  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:50.710173  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.716696  569947 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:50.716718  569947 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:50.716777  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.770714  569947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:50.770743  569947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:50.770811  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.772323  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.774548  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.782771  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.818125  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.922492  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:50.961986  569947 node_ready.go:35] waiting up to 6m0s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:50.964889  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.991305  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:50.991337  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:50.997863  569947 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:51.029470  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:51.029507  569947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:51.077218  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:51.083520  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:51.083552  569947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:51.107276  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:52.474618  569947 node_ready.go:49] node "no-preload-208281" is "Ready"
	I1219 03:04:52.474662  569947 node_ready.go:38] duration metric: took 1.512481187s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:52.474682  569947 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:04:52.474743  569947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:51.142743  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.559306992s)
	I1219 03:04:51.142940  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.499593696s)
	I1219 03:04:51.143060  568301 addons.go:500] Verifying addon metrics-server=true in "embed-certs-536489"
	I1219 03:04:51.143722  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:51.144038  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.580066034s)
	I1219 03:04:52.990446  568301 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.445475643s)
	I1219 03:04:52.990490  568301 api_server.go:72] duration metric: took 5.685402741s to wait for apiserver process to appear ...
	I1219 03:04:52.990498  568301 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:52.990528  568301 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:04:52.992275  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.373532841s)
	I1219 03:04:52.992364  568301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:53.002104  568301 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:04:53.006331  568301 api_server.go:141] control plane version: v1.34.3
	I1219 03:04:53.006385  568301 api_server.go:131] duration metric: took 15.878835ms to wait for apiserver health ...
	I1219 03:04:53.006399  568301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:53.016977  568301 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:53.017141  568301 system_pods.go:61] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.017157  568301 system_pods.go:61] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.017165  568301 system_pods.go:61] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.017184  568301 system_pods.go:61] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.017193  568301 system_pods.go:61] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.017199  568301 system_pods.go:61] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.017212  568301 system_pods.go:61] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.017219  568301 system_pods.go:61] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.017225  568301 system_pods.go:61] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.017233  568301 system_pods.go:74] duration metric: took 10.826754ms to wait for pod list to return data ...
	I1219 03:04:53.017244  568301 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:53.020879  568301 default_sa.go:45] found service account: "default"
	I1219 03:04:53.020911  568301 default_sa.go:55] duration metric: took 3.659738ms for default service account to be created ...
	I1219 03:04:53.020925  568301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:53.118092  568301 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:53.118237  568301 system_pods.go:89] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.118277  568301 system_pods.go:89] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.118286  568301 system_pods.go:89] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.118334  568301 system_pods.go:89] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.118346  568301 system_pods.go:89] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.118360  568301 system_pods.go:89] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.118368  568301 system_pods.go:89] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.118508  568301 system_pods.go:89] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.118523  568301 system_pods.go:89] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.118535  568301 system_pods.go:126] duration metric: took 97.602528ms to wait for k8s-apps to be running ...
	I1219 03:04:53.118546  568301 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:53.118629  568301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:53.213539  568301 addons.go:500] Verifying addon dashboard=true in "embed-certs-536489"
	I1219 03:04:53.213985  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:53.214117  568301 system_svc.go:56] duration metric: took 95.561896ms WaitForService to wait for kubelet
	I1219 03:04:53.214162  568301 kubeadm.go:587] duration metric: took 5.909072172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:53.214187  568301 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:53.220086  568301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:53.220122  568301 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:53.220143  568301 node_conditions.go:105] duration metric: took 5.94983ms to run NodePressure ...
	I1219 03:04:53.220159  568301 start.go:242] waiting for startup goroutines ...
	I1219 03:04:53.239792  568301 out.go:179] * Verifying dashboard addon...
	I1219 03:04:51.859124  566718 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:51.862362  566718 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.241980  568301 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:53.245176  568301 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747449  568301 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747476  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.245867  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.747323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:50.976005  573699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103644" ...
	I1219 03:04:50.976124  573699 cli_runner.go:164] Run: docker start default-k8s-diff-port-103644
	I1219 03:04:51.482862  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:51.514418  573699 kic.go:430] container "default-k8s-diff-port-103644" state is running.
	I1219 03:04:51.515091  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:51.545304  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:51.545913  573699 machine.go:94] provisionDockerMachine start ...
	I1219 03:04:51.546012  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:51.578064  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:51.578471  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:51.578526  573699 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:04:51.580615  573699 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46348->127.0.0.1:33098: read: connection reset by peer
	I1219 03:04:54.740022  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.740053  573699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103644"
	I1219 03:04:54.740121  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.764557  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.764812  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.764832  573699 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103644 && echo "default-k8s-diff-port-103644" | sudo tee /etc/hostname
	I1219 03:04:54.940991  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.941090  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.961163  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.961447  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.961472  573699 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103644/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:04:55.112211  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:04:55.112238  573699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:04:55.112272  573699 ubuntu.go:190] setting up certificates
	I1219 03:04:55.112285  573699 provision.go:84] configureAuth start
	I1219 03:04:55.112354  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.131633  573699 provision.go:143] copyHostCerts
	I1219 03:04:55.131701  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:04:55.131722  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:04:55.131814  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:04:55.131992  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:04:55.132009  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:04:55.132066  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:04:55.132178  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:04:55.132189  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:04:55.132230  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:04:55.132339  573699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103644 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-103644 localhost minikube]
	I1219 03:04:55.201421  573699 provision.go:177] copyRemoteCerts
	I1219 03:04:55.201486  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:04:55.201545  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.220254  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.324809  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:04:55.344299  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:04:55.364633  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:04:55.383945  573699 provision.go:87] duration metric: took 271.644189ms to configureAuth
	I1219 03:04:55.383975  573699 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:04:55.384174  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:55.384190  573699 machine.go:97] duration metric: took 3.838258422s to provisionDockerMachine
	I1219 03:04:55.384201  573699 start.go:293] postStartSetup for "default-k8s-diff-port-103644" (driver="docker")
	I1219 03:04:55.384218  573699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:04:55.384292  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:04:55.384363  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.402689  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.509385  573699 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:04:55.513698  573699 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:04:55.513738  573699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:04:55.513752  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:04:55.513809  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:04:55.513923  573699 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:04:55.514061  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:04:55.522610  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:55.542136  573699 start.go:296] duration metric: took 157.911131ms for postStartSetup
	I1219 03:04:55.542235  573699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:04:55.542278  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.560317  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.676892  573699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:04:55.683207  573699 fix.go:56] duration metric: took 4.75001221s for fixHost
	I1219 03:04:55.683240  573699 start.go:83] releasing machines lock for "default-k8s-diff-port-103644", held for 4.750073001s
	I1219 03:04:55.683337  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.706632  573699 ssh_runner.go:195] Run: cat /version.json
	I1219 03:04:55.706696  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.706708  573699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:04:55.706796  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.729248  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.729555  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.832375  573699 ssh_runner.go:195] Run: systemctl --version
	I1219 03:04:55.888761  573699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:04:55.894089  573699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:04:55.894170  573699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:04:55.902973  573699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:04:55.903001  573699 start.go:496] detecting cgroup driver to use...
	I1219 03:04:55.903039  573699 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:04:55.903123  573699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:04:55.924413  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:04:55.939247  573699 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:04:55.939312  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:04:55.955848  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:04:55.970636  573699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:04:56.060548  573699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:04:56.151469  573699 docker.go:234] disabling docker service ...
	I1219 03:04:56.151544  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:04:56.168733  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:04:56.183785  573699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:04:56.269923  573699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:04:56.358410  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:04:56.374184  573699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:04:56.391509  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:04:56.403885  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:04:56.418704  573699 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:04:56.418843  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:04:56.432502  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.446280  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:04:56.458732  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.471691  573699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:04:56.482737  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:04:56.494667  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:04:56.507284  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:04:56.520174  573699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:04:56.530768  573699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:04:56.541170  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:56.646657  573699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:04:56.781992  573699 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:04:56.782112  573699 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:04:56.788198  573699 start.go:564] Will wait 60s for crictl version
	I1219 03:04:56.788285  573699 ssh_runner.go:195] Run: which crictl
	I1219 03:04:56.793113  573699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:04:56.836402  573699 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:04:56.836474  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.864133  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.898122  573699 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:04:53.197683  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23269288s)
	I1219 03:04:53.197756  569947 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.199861038s)
	I1219 03:04:53.197848  569947 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:53.197862  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.120620602s)
	I1219 03:04:53.198058  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.09074876s)
	I1219 03:04:53.198096  569947 addons.go:500] Verifying addon metrics-server=true in "no-preload-208281"
	I1219 03:04:53.198179  569947 api_server.go:72] duration metric: took 2.540661776s to wait for apiserver process to appear ...
	I1219 03:04:53.198202  569947 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:53.198229  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.198445  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:53.205510  569947 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:04:53.205637  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.205671  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:53.698608  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.705658  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.705697  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:54.198361  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:54.202897  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:04:54.204079  569947 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:04:54.204114  569947 api_server.go:131] duration metric: took 1.005903946s to wait for apiserver health ...
	I1219 03:04:54.204127  569947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:54.208336  569947 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:54.208377  569947 system_pods.go:61] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.208389  569947 system_pods.go:61] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.208403  569947 system_pods.go:61] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.208424  569947 system_pods.go:61] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.208437  569947 system_pods.go:61] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.208444  569947 system_pods.go:61] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.208460  569947 system_pods.go:61] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.208472  569947 system_pods.go:61] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.208477  569947 system_pods.go:61] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.208488  569947 system_pods.go:74] duration metric: took 4.352835ms to wait for pod list to return data ...
	I1219 03:04:54.208503  569947 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:54.211346  569947 default_sa.go:45] found service account: "default"
	I1219 03:04:54.211373  569947 default_sa.go:55] duration metric: took 2.86243ms for default service account to be created ...
	I1219 03:04:54.211385  569947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:54.214301  569947 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:54.214337  569947 system_pods.go:89] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.214347  569947 system_pods.go:89] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.214360  569947 system_pods.go:89] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.214369  569947 system_pods.go:89] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.214377  569947 system_pods.go:89] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.214386  569947 system_pods.go:89] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.214402  569947 system_pods.go:89] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.214411  569947 system_pods.go:89] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.214421  569947 system_pods.go:89] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.214431  569947 system_pods.go:126] duration metric: took 3.039478ms to wait for k8s-apps to be running ...
	I1219 03:04:54.214443  569947 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:54.214504  569947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:54.371132  569947 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.165499888s)
	I1219 03:04:54.371186  569947 system_svc.go:56] duration metric: took 156.734958ms WaitForService to wait for kubelet
	I1219 03:04:54.371215  569947 kubeadm.go:587] duration metric: took 3.713723941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:54.371244  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:04:54.371246  569947 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:54.374625  569947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:54.374660  569947 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:54.374679  569947 node_conditions.go:105] duration metric: took 3.423654ms to run NodePressure ...
	I1219 03:04:54.374695  569947 start.go:242] waiting for startup goroutines ...
	I1219 03:04:57.635651  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.264367144s)
	I1219 03:04:57.635887  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:57.949184  569947 addons.go:500] Verifying addon dashboard=true in "no-preload-208281"
	I1219 03:04:57.949557  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:57.976511  569947 out.go:179] * Verifying dashboard addon...
	I1219 03:04:56.899304  573699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:56.919626  573699 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:56.924517  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:56.937946  573699 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:56.938108  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:56.938182  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.968240  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.968267  573699 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:04:56.968327  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.997359  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.997383  573699 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:56.997392  573699 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 containerd true true} ...
	I1219 03:04:56.997515  573699 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-103644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:56.997591  573699 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:57.033726  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:57.033760  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:57.033788  573699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:57.033818  573699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103644 NodeName:default-k8s-diff-port-103644 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:57.034013  573699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-103644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:57.034110  573699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:04:57.054291  573699 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:57.054366  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:57.069183  573699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1219 03:04:57.092986  573699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:04:57.114537  573699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1219 03:04:57.135768  573699 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:57.141830  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:57.157200  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:57.285296  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:57.321401  573699 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644 for IP: 192.168.94.2
	I1219 03:04:57.321425  573699 certs.go:195] generating shared ca certs ...
	I1219 03:04:57.321445  573699 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:57.321651  573699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:57.321728  573699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:57.321741  573699 certs.go:257] generating profile certs ...
	I1219 03:04:57.321895  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.key
	I1219 03:04:57.321969  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key.eac4724a
	I1219 03:04:57.322032  573699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key
	I1219 03:04:57.322452  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:57.322563  573699 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:57.322947  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:57.323038  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:57.323130  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:57.323212  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:57.323310  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:57.324261  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:57.367430  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:57.395772  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:57.447975  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:57.485724  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:04:57.550160  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:04:57.586359  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:57.650368  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:57.705528  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:57.753827  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:57.796129  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:57.846633  573699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:57.874041  573699 ssh_runner.go:195] Run: openssl version
	I1219 03:04:57.883186  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.893276  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:57.903322  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908713  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908788  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.959424  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:57.975955  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:57.987406  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:57.999924  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007017  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007094  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.066450  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:58.084889  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.104839  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:58.121039  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128831  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128908  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.238719  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:58.257473  573699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:58.269077  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:58.373050  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:58.472122  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:58.523474  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:58.567812  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:58.624150  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:58.663023  573699 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:58.663147  573699 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:58.663225  573699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:58.698055  573699 cri.go:92] found id: "19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c"
	I1219 03:04:58.698124  573699 cri.go:92] found id: "c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7"
	I1219 03:04:58.698150  573699 cri.go:92] found id: "a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1"
	I1219 03:04:58.698161  573699 cri.go:92] found id: "fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652"
	I1219 03:04:58.698166  573699 cri.go:92] found id: "36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d"
	I1219 03:04:58.698176  573699 cri.go:92] found id: "ed906de27de9c3783be2432f68b3e79b562b368da4fe5ddde333748fe58c2534"
	I1219 03:04:58.698180  573699 cri.go:92] found id: "72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d"
	I1219 03:04:58.698185  573699 cri.go:92] found id: "872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358"
	I1219 03:04:58.698189  573699 cri.go:92] found id: "dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525"
	I1219 03:04:58.698201  573699 cri.go:92] found id: "ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503"
	I1219 03:04:58.698208  573699 cri.go:92] found id: "069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd"
	I1219 03:04:58.698212  573699 cri.go:92] found id: "49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233"
	I1219 03:04:58.698216  573699 cri.go:92] found id: ""
	I1219 03:04:58.698271  573699 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:58.725948  573699 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537/rootfs","created":"2025-12-19T03:04:58.065318041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-103644_50f4d1ce4fca33a4531f882f5fb97a4e","io.kubernetes.cri.sa
ndbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","pid":981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c/rootfs","created":"2025-12-19T03:04:58.375811399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.3","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-name":"kube-controller-manager-
default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be/rootfs","created":"2025-12-19T03:04:58.067793692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube
-system_kube-controller-manager-default-k8s-diff-port-103644_ac53bb8a0832eefbaa4a648be6aad901","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f/rootfs","created":"2025-12-19T03:04:58.050783422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernet
es.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-103644_996cf4b38188d4b0d664648ad2102013","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","pid":796,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc/rootfs","created":"2025-12-19T03:04:58.031779484Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","
io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-diff-port-103644_4275d7c883d3f735b8de47264bc63415","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb4214
99d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1/rootfs","created":"2025-12-19T03:04:58.294875595Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.3","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7/rootfs","created":"2025-12-19T03:04:58.293243949Z","
annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.3","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652/rootfs","created":"2025-12-19T03:04:58.225549561Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"co
ntainer","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"}]
	I1219 03:04:58.726160  573699 cri.go:129] list returned 8 containers
	I1219 03:04:58.726176  573699 cri.go:132] container: {ID:0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 Status:running}
	I1219 03:04:58.726215  573699 cri.go:134] skipping 0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 - not in ps
	I1219 03:04:58.726225  573699 cri.go:132] container: {ID:19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c Status:running}
	I1219 03:04:58.726238  573699 cri.go:138] skipping {19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c running}: state = "running", want "paused"
	I1219 03:04:58.726253  573699 cri.go:132] container: {ID:6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be Status:running}
	I1219 03:04:58.726263  573699 cri.go:134] skipping 6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be - not in ps
	I1219 03:04:58.726272  573699 cri.go:132] container: {ID:6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f Status:running}
	I1219 03:04:58.726282  573699 cri.go:134] skipping 6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f - not in ps
	I1219 03:04:58.726287  573699 cri.go:132] container: {ID:84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc Status:running}
	I1219 03:04:58.726296  573699 cri.go:134] skipping 84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc - not in ps
	I1219 03:04:58.726300  573699 cri.go:132] container: {ID:a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 Status:running}
	I1219 03:04:58.726310  573699 cri.go:138] skipping {a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 running}: state = "running", want "paused"
	I1219 03:04:58.726317  573699 cri.go:132] container: {ID:c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 Status:running}
	I1219 03:04:58.726327  573699 cri.go:138] skipping {c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 running}: state = "running", want "paused"
	I1219 03:04:58.726334  573699 cri.go:132] container: {ID:fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 Status:running}
	I1219 03:04:58.726341  573699 cri.go:138] skipping {fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 running}: state = "running", want "paused"
	I1219 03:04:58.726406  573699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:58.736002  573699 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:58.736024  573699 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:58.736083  573699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:58.745325  573699 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:58.746851  573699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103644" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.747840  573699 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103644" cluster setting kubeconfig missing "default-k8s-diff-port-103644" context setting]
	I1219 03:04:58.749236  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.751783  573699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:58.761185  573699 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:04:58.761233  573699 kubeadm.go:602] duration metric: took 25.202742ms to restartPrimaryControlPlane
	I1219 03:04:58.761245  573699 kubeadm.go:403] duration metric: took 98.23938ms to StartCluster
	I1219 03:04:58.761266  573699 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.761344  573699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.763956  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.764278  573699 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:58.764352  573699 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:58.764458  573699 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764482  573699 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.764491  573699 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:58.764498  573699 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764518  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:58.764533  573699 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764530  573699 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764551  573699 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764557  573699 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764521  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764523  573699 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103644"
	W1219 03:04:58.764565  573699 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:58.764660  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764898  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765067  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	W1219 03:04:58.764563  573699 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:58.765224  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.765244  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765778  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.766439  573699 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:58.769848  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:58.795158  573699 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.795295  573699 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:58.795354  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.796260  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.798810  573699 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:58.798816  573699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:57.865290  566718 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.865322  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.373051  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.867408  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.364332  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.245497  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.746387  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.245217  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.749455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.246279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.748208  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.247627  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.745395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.247400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.747210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.799225  573699 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:58.799247  573699 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:58.799304  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.799993  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:58.800017  573699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:58.800075  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.800356  573699 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:58.800371  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:58.800429  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.837919  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.838753  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.846681  573699 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:58.846725  573699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:58.846799  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.869014  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.891596  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.990117  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:59.008626  573699 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:59.009409  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:59.016187  573699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:04:59.016907  573699 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:59.044939  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:59.044973  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:59.048120  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:59.087063  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:59.087153  573699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:59.114132  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:59.114163  573699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:59.144085  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:05:00.372562  573699 node_ready.go:49] node "default-k8s-diff-port-103644" is "Ready"
	I1219 03:05:00.372622  573699 node_ready.go:38] duration metric: took 1.356373278s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:05:00.372644  573699 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:00.372706  573699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:57.979521  569947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:57.983495  569947 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.983523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.489816  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.984080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.484148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.983915  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.484939  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.985080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.486418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.986557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.484684  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.866115  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.365239  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.866184  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.366415  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.863549  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.364375  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.863998  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.363890  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.863749  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.382768  566718 kapi.go:107] duration metric: took 12.523639555s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:04.433515  566718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-002036 addons enable metrics-server
	
	I1219 03:05:04.435631  566718 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:05:04.437408  566718 addons.go:546] duration metric: took 22.668379604s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:05:04.437463  566718 start.go:247] waiting for cluster config update ...
	I1219 03:05:04.437482  566718 start.go:256] writing updated cluster config ...
	I1219 03:05:04.437853  566718 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:04.443668  566718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:04.450779  566718 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:00.248093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.749216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.247778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.747890  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.245449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.746684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.247359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.746557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.746278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.448117  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.43867528s)
	I1219 03:05:01.448182  573699 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.431240621s)
	I1219 03:05:01.448196  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.399991052s)
	I1219 03:05:01.448260  573699 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:01.448385  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.304270108s)
	I1219 03:05:01.448406  573699 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:05:01.448485  573699 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.075756393s)
	I1219 03:05:01.448520  573699 api_server.go:72] duration metric: took 2.684209271s to wait for apiserver process to appear ...
	I1219 03:05:01.448536  573699 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:01.448558  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.448716  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:01.458744  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:01.458783  573699 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:01.950069  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.959300  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:01.960703  573699 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:01.960739  573699 api_server.go:131] duration metric: took 512.19419ms to wait for apiserver health ...
	I1219 03:05:01.960751  573699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:01.965477  573699 system_pods.go:59] 9 kube-system pods found
	I1219 03:05:01.965544  573699 system_pods.go:61] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.965560  573699 system_pods.go:61] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.965595  573699 system_pods.go:61] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.965611  573699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.965623  573699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.965631  573699 system_pods.go:61] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.965640  573699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.965653  573699 system_pods.go:61] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.965660  573699 system_pods.go:61] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.965670  573699 system_pods.go:74] duration metric: took 4.91154ms to wait for pod list to return data ...
	I1219 03:05:01.965682  573699 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:01.969223  573699 default_sa.go:45] found service account: "default"
	I1219 03:05:01.969255  573699 default_sa.go:55] duration metric: took 3.563468ms for default service account to be created ...
	I1219 03:05:01.969269  573699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:01.973647  573699 system_pods.go:86] 9 kube-system pods found
	I1219 03:05:01.973775  573699 system_pods.go:89] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.973790  573699 system_pods.go:89] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.973797  573699 system_pods.go:89] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.973804  573699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.973810  573699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.973828  573699 system_pods.go:89] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.973834  573699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.973840  573699 system_pods.go:89] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.973843  573699 system_pods.go:89] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.973852  573699 system_pods.go:126] duration metric: took 4.574679ms to wait for k8s-apps to be running ...
	I1219 03:05:01.973859  573699 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:01.973912  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:02.653061  573699 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.204735295s)
	I1219 03:05:02.653137  573699 system_svc.go:56] duration metric: took 679.266214ms WaitForService to wait for kubelet
	I1219 03:05:02.653168  573699 kubeadm.go:587] duration metric: took 3.888855367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:02.653197  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:02.653199  573699 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:02.656332  573699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:02.656365  573699 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:02.656382  573699 node_conditions.go:105] duration metric: took 3.090983ms to run NodePressure ...
	I1219 03:05:02.656398  573699 start.go:242] waiting for startup goroutines ...
	I1219 03:05:05.900902  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.247656336s)
	I1219 03:05:05.901008  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:06.370072  573699 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:05:06.370443  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:06.413077  573699 out.go:179] * Verifying dashboard addon...
	I1219 03:05:02.984573  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.483377  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.983965  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.483784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.983862  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.484412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.985034  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.484458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.983536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:06.463527  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:08.958366  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:05.245656  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.747655  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.245722  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.748049  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.245806  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.806712  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.317551  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.746359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.246666  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.745789  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.432631  573699 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:06.442236  573699 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:06.442267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.938273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.436226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.935844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.437222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.937396  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.436432  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.937420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.436795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.982775  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.484705  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.483954  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.984850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.484036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.985868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.484253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.984283  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.483325  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:11.457419  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:13.957361  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:10.247114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.246179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.245687  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.245905  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.745641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.245181  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.937352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.436009  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.937001  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.437140  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.937021  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.936272  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.937045  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.436754  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.983838  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.483669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.983389  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.483140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.483333  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.983426  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.483195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.982683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.483883  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:16.457830  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:18.956955  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:15.245238  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.746028  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.245738  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.746152  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.245944  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.244810  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.745484  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.245267  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.747027  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.935367  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.437144  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.936697  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.436257  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.938151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.436806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.936368  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.436056  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.936823  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.436574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.956728  566718 pod_ready.go:94] pod "coredns-5dd5756b68-l88tx" is "Ready"
	I1219 03:05:20.956755  566718 pod_ready.go:86] duration metric: took 16.505943894s for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.959784  566718 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.964097  566718 pod_ready.go:94] pod "etcd-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.964121  566718 pod_ready.go:86] duration metric: took 4.312579ms for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.967209  566718 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.971311  566718 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.971340  566718 pod_ready.go:86] duration metric: took 4.107095ms for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.974403  566718 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.155192  566718 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-002036" is "Ready"
	I1219 03:05:21.155230  566718 pod_ready.go:86] duration metric: took 180.802142ms for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.356374  566718 pod_ready.go:83] waiting for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.755068  566718 pod_ready.go:94] pod "kube-proxy-666m9" is "Ready"
	I1219 03:05:21.755101  566718 pod_ready.go:86] duration metric: took 398.695005ms for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.955309  566718 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355240  566718 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-002036" is "Ready"
	I1219 03:05:22.355268  566718 pod_ready.go:86] duration metric: took 399.930732ms for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355280  566718 pod_ready.go:40] duration metric: took 17.911572961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.403101  566718 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1219 03:05:22.405195  566718 out.go:203] 
	W1219 03:05:22.406549  566718 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1219 03:05:22.407721  566718 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1219 03:05:22.409075  566718 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-002036" cluster and "default" namespace by default
	I1219 03:05:17.983934  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.983469  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.483031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.483856  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.983202  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.983682  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.483477  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.246405  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.745732  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.745513  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.246072  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.746161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.245454  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.745802  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.246011  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.745886  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.936632  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.937387  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.438356  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.936036  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.436638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.436285  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.936343  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.436214  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.983526  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.483608  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.984007  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.483768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.983330  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.483626  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.983245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.983688  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.483645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.245298  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.745913  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.246357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.746837  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.245727  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.745064  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.245698  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.745390  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.245749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.746545  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.436179  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.936807  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.436692  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.936427  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.436416  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.936100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.436165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.936887  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.437744  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.983729  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.484151  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.982796  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.483575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.983807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.482703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.483041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.245841  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.746191  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.246984  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.746555  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.245535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.245430  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.746001  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.245532  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.745216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.936806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.437044  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.937073  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.436137  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.937365  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.936352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.435813  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.936438  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.435923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.483382  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.984500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.483032  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.984071  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.482466  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.983161  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.482900  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.983524  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.245754  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.745276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.246044  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.747272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.246098  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.746535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.245821  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.745937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.245762  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.745615  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.936381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.436916  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.936622  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.436000  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.937259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.437162  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.937047  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.437352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.936682  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.436615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.983600  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.483773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.983567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.483752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.983264  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.983322  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.483362  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.983957  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.484274  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.246185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.746459  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.246128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.745336  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.245848  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.745183  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.938808  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.437447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.936560  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.436119  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.935681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.436727  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.436379  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.936023  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.983002  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.484428  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.484439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.983087  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.483617  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.483126  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.982743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.483122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.747099  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.245089  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.746901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.245684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.745166  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.245353  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.745700  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.245083  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.745319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.936637  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.436382  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.935972  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.436262  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.937175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.435775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.936174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.436927  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.936454  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.436467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.983564  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.484562  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.983390  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.483073  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.984121  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.482952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.483850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.245533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.746378  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.246407  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.746164  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.746473  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.245686  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.745616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.246701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.746221  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.937100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.436658  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.936554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.436723  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.436301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.936888  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.983429  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.484287  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.983438  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.982975  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.483937  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.984116  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.982483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.484172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.245635  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.746068  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.245613  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.245784  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.746179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.246036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.745916  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.246105  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.745511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.936404  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.436974  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.937181  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.436933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.936461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.435893  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.936715  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.435977  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.936537  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.436413  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.984117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.483494  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.983431  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.483144  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.983693  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.483725  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.483568  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.983844  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.484041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.247210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.246917  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.746507  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.246482  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.745791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.246149  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.745750  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.246542  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.746182  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.935753  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.936399  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.437035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.936175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.437157  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.936167  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.437079  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.936622  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.435994  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.484159  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.983491  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.483027  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.984206  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.482988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.984416  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.482988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.983673  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.483363  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.245974  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.246325  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.746954  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.246178  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.746530  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.246617  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.746319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.246086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.745852  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.937050  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.436626  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.935960  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.436359  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.936462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.436428  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.936121  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.436653  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.983609  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.483348  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.983602  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.483970  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.984565  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.483846  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.983764  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.983995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.483230  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.246294  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.746747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.245812  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.746679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.246641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.245869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.745759  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.245568  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.746073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.936517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.435795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.937696  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.436353  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.935510  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.436005  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.936614  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.436666  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.937104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.436494  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.982961  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.483812  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.984205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.484367  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.983535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.483245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.982974  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.483840  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.983639  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.245741  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.746076  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.746268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.245914  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.745460  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.246201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.745720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.246075  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.746406  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.936573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.436355  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.935609  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.436112  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.936695  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.436177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.936615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.436180  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.936693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.436473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.984187  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.484214  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.983011  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.483899  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.984512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.482716  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.983406  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.985122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.483290  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.246645  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.746554  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.245477  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.746237  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.246559  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.746156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.245694  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.744920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.246400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.745171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.936301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.435818  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.436319  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.436967  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.436573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.936226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.436480  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.983215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.483166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.983180  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.483488  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.983441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.482752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.983544  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.482808  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.746511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.245967  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.746303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.245996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.745286  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.246778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.745279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.245781  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.745086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.936101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.437131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.936600  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.436041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.937177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.437421  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.935735  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.436019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.437190  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.984252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.483837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.983514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.482704  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.983246  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.482944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.984320  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.246209  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.745803  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.245503  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.246768  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.745863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.245185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.745549  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.245747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.746416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.935759  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.435954  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.436706  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.936420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.436605  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.937043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.437152  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.436211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.983286  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.483036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.485767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.983683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.483037  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.483748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.245980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.745904  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.747073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.246061  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.746010  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.745926  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.245654  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.745463  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.437530  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.436942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.437229  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.936794  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.436501  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.936447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.436258  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.983789  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.483692  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.983255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.483001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.982877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.483721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.983399  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.482771  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.983968  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.483847  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.246603  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.745229  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.245985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.746233  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.246354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.746354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.245729  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.745993  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.745977  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.436604  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.936997  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.436608  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.936332  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.436076  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.937096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.936644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.436313  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.483231  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.983328  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.483130  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.983671  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.984498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.483267  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.982818  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.483172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.246007  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.246281  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.746636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.245338  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.746505  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.246541  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.246003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.746025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.935627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.437425  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.936905  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.436271  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.436681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.937261  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.436230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.983761  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.483697  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.983928  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.484339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.983038  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.483830  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.983519  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.246203  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.745909  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.245212  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.746317  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.246429  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.746706  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.245252  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.746054  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.248935  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.436150  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.937541  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.436306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.937380  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.437032  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.437101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.435707  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.983425  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.482996  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.984413  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.483150  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.983223  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.483220  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.983167  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.482640  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.983417  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.483783  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.245215  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.246277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.245861  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.745707  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.245371  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.245515  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.745912  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.936135  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.437841  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.936910  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.436323  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.936660  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.436524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.936221  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.436563  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.935913  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.436645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.984125  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.483388  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.982737  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.483773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.983545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.483422  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.983154  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.483664  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.983641  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.483442  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.245728  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.745308  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.246025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.745765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.246408  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.746848  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.245127  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.746104  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.246223  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.936306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.437231  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (process 573699 repeats this line at ~500 ms intervals through 03:07:45.436119)
	I1219 03:07:02.983225  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (process 569947 repeats this line at ~500 ms intervals through 03:07:45.983273)
	I1219 03:07:05.245839  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (process 568301 repeats this line at ~500 ms intervals through 03:07:44.746548)
	I1219 03:07:46.483358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.983949  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.245051  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.745840  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.245710  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.747059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.245761  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.746224  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.245979  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.746397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.246462  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.745161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.936393  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.435574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.936269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.436736  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.935923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.436191  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.937125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.436724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.936060  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.436464  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.983875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.983702  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.483743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.983649  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.484353  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.484106  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.983289  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.483003  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.245241  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.746800  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.245636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.745903  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.245501  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.746786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.245828  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.746731  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.245243  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.746109  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.936423  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.436185  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.937335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.435811  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.936607  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.437193  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.436703  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.936452  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.436033  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.982921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.984334  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.483331  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.983338  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.983619  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.483807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.983721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.483219  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.745310  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.748380  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.246087  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.246172  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.746116  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.246000  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.745364  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.938959  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.436375  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.936439  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.435973  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.936388  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.435955  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.937067  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.436689  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.936873  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.436068  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.983216  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.982893  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.983507  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.483848  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.983741  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.483139  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.483474  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.245849  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.745943  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.245514  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.745976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.245776  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.745774  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.246195  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.937126  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.437088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.936378  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.435816  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.936486  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.436861  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.936773  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.437070  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.983196  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.482648  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.984096  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.483607  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.483828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.983686  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.484218  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.984889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.484117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.245432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.746171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.246148  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.746794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.245134  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.745858  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.245332  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.245744  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.745345  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.935722  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.437147  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.937110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.436107  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.936683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.437338  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.937224  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.435895  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.936364  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.436440  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.984241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.483451  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.983165  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.483042  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.982951  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.484340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.983004  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.483822  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.983489  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.483877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.246451  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.746155  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.246021  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.745725  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.245017  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.747153  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.246746  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.245869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.937288  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.436218  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.937058  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.436201  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.936942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.436514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.937227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.435900  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.937246  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.437248  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.483319  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.983759  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.483672  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.983171  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.482646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.983174  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.983864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.484102  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.245723  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.745561  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.247817  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.747200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.246180  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.746059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.245772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.746003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.245769  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.745631  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.935465  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.436710  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.936296  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.436222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.937015  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.937083  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.436796  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.936995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.437457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.483942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.983638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.483595  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.484503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.983773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.483765  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.983647  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.483706  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.246047  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.746223  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.245764  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.246013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.245843  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.745567  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.246427  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.746391  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.937102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.435564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.936469  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.436649  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.436778  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.936059  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.437189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.937170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.436704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.982868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.484268  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.983374  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.483212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.983344  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.483884  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.983398  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.484023  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.984234  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.483988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.246093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.745866  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.245647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.747173  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.245862  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.745538  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.245299  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.746103  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.746350  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.937269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.435729  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.936734  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.436476  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.936918  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.436636  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.936510  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.436255  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.936175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.436005  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.983312  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.484050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.983339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.482531  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.982929  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.483747  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.983500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.482861  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.484296  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.245816  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.745632  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.245311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.746323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.246307  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.746634  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.245352  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.746294  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.246399  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.746747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.937031  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.436676  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.936840  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.436650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.936793  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.436310  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.936030  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.437178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.937165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.436157  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.983447  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.484087  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.484195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.483424  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.982827  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.483920  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.984144  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.484302  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.245293  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.746004  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.245793  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.746989  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.245794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.746839  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.245459  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.245861  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.745472  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.937370  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.435903  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.936747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.436447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.937054  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.937481  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.936333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.436131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.983136  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.484093  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.983753  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.483392  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.983335  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.483238  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.982643  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.483017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.983148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.484213  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.245696  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.745797  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.245831  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.245558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.745449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.246006  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.746105  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.246305  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.746990  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.936241  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.436869  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.936851  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.436552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.936544  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.436217  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.936790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.435881  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.937211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.435800  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.982609  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.483184  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.984245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.483444  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.983516  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.482273  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.982784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.483318  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.983225  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.484299  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.245057  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.745991  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.245705  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.746558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.245976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.745649  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.245488  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.745691  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.245062  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.745495  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.936018  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.437324  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.936366  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.436108  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.936330  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.435727  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.936825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.436120  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.937117  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.436125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.986039  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.483907  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.483362  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.982827  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.983035  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.483293  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.983566  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.483534  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.246060  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.746141  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.245517  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.745461  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.246136  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.746190  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.246005  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.745779  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.245690  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.746440  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.936858  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.436399  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.936936  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.436270  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.936040  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.436627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.935956  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.436964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.437181  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.982975  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.483774  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.983188  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.484313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.983476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.483624  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.983235  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.484059  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.983666  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.483836  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.246365  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.746334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.246033  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.746651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.245323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.746357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.245635  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.745658  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.245395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.745819  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.936516  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.436483  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.936444  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.936892  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.436633  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.936620  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.436269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.936896  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.436566  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.983297  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.484464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.483511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.982836  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.483736  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.983424  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.483308  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.982575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.483472  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.245397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.746693  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.245417  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.745772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.745980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.745540  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.245125  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.746311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.436345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.937223  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.436491  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.936542  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.436156  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.936757  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.436434  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.437143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.983140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.483948  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.983404  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.484135  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.983017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.483191  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.983258  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.483593  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.982879  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.482719  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.745523  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.246156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.746714  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.245457  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.745845  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.245496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.745521  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.745647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.936297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.435928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.936499  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.935885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.436830  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.937053  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.436174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.936555  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.436004  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.983540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.483013  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.983280  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.483326  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.983039  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.483498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.483944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.983380  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.483057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.246452  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.746248  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.246124  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.746214  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.245557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.746434  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.245268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.746177  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.245924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.747881  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.936969  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.936145  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.435740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.937011  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.437024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.935613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.436125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.436909  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.984340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.483254  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.984703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.483313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.982835  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.483493  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.982869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.983946  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.483204  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.245275  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.746276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.245920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.746771  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.245651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.744791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.245637  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.745922  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.936545  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.937153  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.435953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.937080  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.936110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.435657  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.935804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.436240  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.983897  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.483952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.484088  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.983714  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.483215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.983277  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.483667  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.982875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.483370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.245437  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.745749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.246263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.245277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.245283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.745807  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.745496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.935998  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.436702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.936853  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.936508  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.435898  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.938866  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.436406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.936267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.435443  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.983387  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.483176  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.984078  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.483842  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.483314  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.483709  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.746235  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.246283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.746411  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.246592  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.745927  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.245680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.745389  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.246386  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.745671  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.936495  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.436178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.435968  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.936852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.436035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.436057  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.936860  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.983478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.483606  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.984122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.490050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.982603  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.483055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.984015  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.483501  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.982832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.245020  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.745924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.245930  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.745911  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.745201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.245713  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.745983  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.245893  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.745539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.935985  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.436747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.936740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.436110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.937088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.436764  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.436386  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.983173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.483859  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.983142  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.984166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.483826  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.983185  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.484158  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.984358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.482832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.246393  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.745896  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.245850  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.246273  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.747864  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.245616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.745334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.246449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.744981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.936971  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.436804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.436958  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.936877  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.436656  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.936136  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.983744  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.482921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.983872  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.483540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.984141  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.483479  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.984063  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.483481  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.746558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.246611  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.745533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.245131  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.746326  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.246887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.745358  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.246189  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.745991  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.937573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.435677  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.936406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.435935  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.936714  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.435885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.936556  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.983361  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.482912  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.983873  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.482660  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.483503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.983067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.483638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.245846  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.746643  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.245931  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.746121  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.246355  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.745777  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.245928  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.246014  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.745623  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.936490  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.437169  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.936638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.435797  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.937106  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.436462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.935673  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.435704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.983064  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.483495  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.983383  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.482815  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.483521  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.983458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.483539  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.982669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.482740  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.245254  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.746529  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.246403  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.746576  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.245194  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.245791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.745384  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.246056  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.745809  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.936502  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.436533  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.936298  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.436872  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.936965  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.436624  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.936645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.435868  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.936019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.436761  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.984260  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.483436  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.983307  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.983837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.983703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.483097  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.984370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.483476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.245416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.745596  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.246315  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.746972  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.246432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.746169  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.245899  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.745701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.246684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.746013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.936103  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.436731  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.936130  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.436934  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.936650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.435890  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.936552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.436324  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.936567  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.436613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.982857  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.483173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.984076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.983152  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.483700  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.483248  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.983111  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.483698  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.245724  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.746426  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.245360  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.245174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.746009  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.246343  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.746019  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.245779  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.745882  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.935947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.437327  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.937129  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.436468  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.436333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.936134  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.436385  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.937151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.437232  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.983942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.483661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.983172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.483439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.982645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.483045  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.984031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.483303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.245641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.745823  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.245494  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.746765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.245879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.745869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.245211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.246504  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.744996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.936844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.436478  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.935984  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.436742  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.935862  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.436143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.936623  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.936964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.436154  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.983001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.483616  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.483478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.982888  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.483505  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.482828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.982887  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.483514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.245552  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.745120  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling lines repeat every ~500ms from PIDs 568301, 573699, and 569947 through 03:10:52; the pod remains Pending throughout ...]
	I1219 03:10:52.745679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.243066  568301 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:10:53.243101  568301 kapi.go:107] duration metric: took 6m0.001125868s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:53.243227  568301 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:53.244995  568301 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 03:10:53.246175  568301 addons.go:546] duration metric: took 6m5.940868392s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 03:10:53.246216  568301 start.go:247] waiting for cluster config update ...
	I1219 03:10:53.246230  568301 start.go:256] writing updated cluster config ...
	I1219 03:10:53.246533  568301 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:53.251613  568301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:53.256756  568301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.261260  568301 pod_ready.go:94] pod "coredns-66bc5c9577-qmb9z" is "Ready"
	I1219 03:10:53.261285  568301 pod_ready.go:86] duration metric: took 4.502294ms for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.263432  568301 pod_ready.go:83] waiting for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.267796  568301 pod_ready.go:94] pod "etcd-embed-certs-536489" is "Ready"
	I1219 03:10:53.267819  568301 pod_ready.go:86] duration metric: took 4.363443ms for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.269959  568301 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.273954  568301 pod_ready.go:94] pod "kube-apiserver-embed-certs-536489" is "Ready"
	I1219 03:10:53.273978  568301 pod_ready.go:86] duration metric: took 3.994974ms for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.276324  568301 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.655995  568301 pod_ready.go:94] pod "kube-controller-manager-embed-certs-536489" is "Ready"
	I1219 03:10:53.656024  568301 pod_ready.go:86] duration metric: took 379.67922ms for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.856274  568301 pod_ready.go:83] waiting for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.256232  568301 pod_ready.go:94] pod "kube-proxy-qhlhx" is "Ready"
	I1219 03:10:54.256260  568301 pod_ready.go:86] duration metric: took 399.957557ms for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.456456  568301 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856278  568301 pod_ready.go:94] pod "kube-scheduler-embed-certs-536489" is "Ready"
	I1219 03:10:54.856307  568301 pod_ready.go:86] duration metric: took 399.821962ms for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856318  568301 pod_ready.go:40] duration metric: took 1.60467121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:54.908914  568301 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:10:54.910224  568301 out.go:179] * Done! kubectl is now configured to use "embed-certs-536489" cluster and "default" namespace by default
	I1219 03:10:50.936043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.437199  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.937554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.436648  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.935325  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.437090  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.936467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.435747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.937514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.437259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.983483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.483110  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.483441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.983723  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.483799  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.983265  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.482795  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.980094  569947 kapi.go:107] duration metric: took 6m0.000564024s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:57.980271  569947 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:57.982221  569947 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:10:57.983556  569947 addons.go:546] duration metric: took 6m7.330731268s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:10:57.983643  569947 start.go:247] waiting for cluster config update ...
	I1219 03:10:57.983661  569947 start.go:256] writing updated cluster config ...
	I1219 03:10:57.983965  569947 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:57.988502  569947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:57.993252  569947 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:57.997922  569947 pod_ready.go:94] pod "coredns-7d764666f9-hm5hz" is "Ready"
	I1219 03:10:57.997946  569947 pod_ready.go:86] duration metric: took 4.66305ms for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.000317  569947 pod_ready.go:83] waiting for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.004843  569947 pod_ready.go:94] pod "etcd-no-preload-208281" is "Ready"
	I1219 03:10:58.004871  569947 pod_ready.go:86] duration metric: took 4.527165ms for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.006889  569947 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.010814  569947 pod_ready.go:94] pod "kube-apiserver-no-preload-208281" is "Ready"
	I1219 03:10:58.010843  569947 pod_ready.go:86] duration metric: took 3.912426ms for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.012893  569947 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.394606  569947 pod_ready.go:94] pod "kube-controller-manager-no-preload-208281" is "Ready"
	I1219 03:10:58.394643  569947 pod_ready.go:86] duration metric: took 381.720753ms for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.594310  569947 pod_ready.go:83] waiting for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.994002  569947 pod_ready.go:94] pod "kube-proxy-xst8w" is "Ready"
	I1219 03:10:58.994037  569947 pod_ready.go:86] duration metric: took 399.698104ms for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.194965  569947 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594191  569947 pod_ready.go:94] pod "kube-scheduler-no-preload-208281" is "Ready"
	I1219 03:10:59.594219  569947 pod_ready.go:86] duration metric: took 399.226469ms for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594230  569947 pod_ready.go:40] duration metric: took 1.605690954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:59.642421  569947 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:10:59.644674  569947 out.go:179] * Done! kubectl is now configured to use "no-preload-208281" cluster and "default" namespace by default
	I1219 03:10:55.937173  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.435825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.936702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.436527  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.436611  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.936591  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.436321  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.937837  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.436459  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.936639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.437141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.936951  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.436292  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.437702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.936237  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.436721  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.936104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.439639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.936149  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:06.433765  573699 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:11:06.433806  573699 kapi.go:107] duration metric: took 6m0.001182154s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:11:06.433932  573699 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:11:06.435864  573699 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:11:06.437280  573699 addons.go:546] duration metric: took 6m7.672932083s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:11:06.437331  573699 start.go:247] waiting for cluster config update ...
	I1219 03:11:06.437348  573699 start.go:256] writing updated cluster config ...
	I1219 03:11:06.437666  573699 ssh_runner.go:195] Run: rm -f paused
	I1219 03:11:06.441973  573699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:06.446110  573699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.450837  573699 pod_ready.go:94] pod "coredns-66bc5c9577-86vsf" is "Ready"
	I1219 03:11:06.450868  573699 pod_ready.go:86] duration metric: took 4.729554ms for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.453222  573699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.457430  573699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.457451  573699 pod_ready.go:86] duration metric: took 4.204892ms for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.459510  573699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.463733  573699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.463756  573699 pod_ready.go:86] duration metric: took 4.230488ms for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.465771  573699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.846433  573699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.846461  573699 pod_ready.go:86] duration metric: took 380.664307ms for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.046474  573699 pod_ready.go:83] waiting for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.446485  573699 pod_ready.go:94] pod "kube-proxy-lgw6f" is "Ready"
	I1219 03:11:07.446515  573699 pod_ready.go:86] duration metric: took 400.010893ms for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.647551  573699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046807  573699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:08.046840  573699 pod_ready.go:86] duration metric: took 399.227778ms for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046853  573699 pod_ready.go:40] duration metric: took 1.604833632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:08.095708  573699 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:11:08.097778  573699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103644" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	bbf97a8974843       6e38f40d628db       14 minutes ago      Running             storage-provisioner                    2                   78e1c92e493dc       storage-provisioner                                     kube-system
	a617f907f5976       3a975970da2f5       14 minutes ago      Running             proxy                                  0                   42350c56c1ed2       kubernetes-dashboard-kong-9849c64bd-xp7zj               kubernetes-dashboard
	ba8583d499dab       3a975970da2f5       14 minutes ago      Exited              clear-stale-pid                        0                   42350c56c1ed2       kubernetes-dashboard-kong-9849c64bd-xp7zj               kubernetes-dashboard
	6d2c834ee3967       dd54374d0ab14       14 minutes ago      Running             kubernetes-dashboard-auth              0                   bbd45612817e1       kubernetes-dashboard-auth-557d9fbf7b-86ldt              kubernetes-dashboard
	b5a0ba5562bdd       d9cbc9f4053ca       15 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   a41cf6e9e0d81       kubernetes-dashboard-metrics-scraper-7685fd8b77-9nkzr   kubernetes-dashboard
	88391b53389ad       4921d7a6dffa9       15 minutes ago      Running             kindnet-cni                            1                   0e15be7bc9aeb       kindnet-kzlhv                                           kube-system
	d98f6ee8737a4       52546a367cc9e       15 minutes ago      Running             coredns                                1                   d90c48f4e5290       coredns-66bc5c9577-qmb9z                                kube-system
	1ce22fc8ed5c2       56cc512116c8f       15 minutes ago      Running             busybox                                1                   3dfc57a649643       busybox                                                 default
	c081d39cdf580       6e38f40d628db       15 minutes ago      Exited              storage-provisioner                    1                   78e1c92e493dc       storage-provisioner                                     kube-system
	23500931d7544       36eef8e07bdd6       15 minutes ago      Running             kube-proxy                             1                   559f33ba97424       kube-proxy-qhlhx                                        kube-system
	2a322cf835b3b       aec12dadf56dd       15 minutes ago      Running             kube-scheduler                         1                   ce16640408b00       kube-scheduler-embed-certs-536489                       kube-system
	5cc5c75096006       a3e246e9556e9       15 minutes ago      Running             etcd                                   1                   cd1766a84546d       etcd-embed-certs-536489                                 kube-system
	ce0c57d49301f       5826b25d990d7       15 minutes ago      Running             kube-controller-manager                1                   c167fe9f101c1       kube-controller-manager-embed-certs-536489              kube-system
	adaab08cca65a       aa27095f56193       15 minutes ago      Running             kube-apiserver                         1                   fd4903f32446c       kube-apiserver-embed-certs-536489                       kube-system
	5e05b1748ea17       56cc512116c8f       15 minutes ago      Exited              busybox                                0                   638d78702ad99       busybox                                                 default
	49d80e3460629       52546a367cc9e       15 minutes ago      Exited              coredns                                0                   799f22c98e05d       coredns-66bc5c9577-qmb9z                                kube-system
	6ef3449d1c944       4921d7a6dffa9       15 minutes ago      Exited              kindnet-cni                            0                   4857662120796       kindnet-kzlhv                                           kube-system
	c5a983f195e2d       36eef8e07bdd6       15 minutes ago      Exited              kube-proxy                             0                   4a28fb48631fb       kube-proxy-qhlhx                                        kube-system
	694f1a505c59b       aec12dadf56dd       16 minutes ago      Exited              kube-scheduler                         0                   8c22c22f26aa9       kube-scheduler-embed-certs-536489                       kube-system
	950bbd91cdf6c       a3e246e9556e9       16 minutes ago      Exited              etcd                                   0                   2a04dd26f6f99       etcd-embed-certs-536489                                 kube-system
	297af3c7b709c       5826b25d990d7       16 minutes ago      Exited              kube-controller-manager                0                   da3910955a305       kube-controller-manager-embed-certs-536489              kube-system
	a468570a2a402       aa27095f56193       16 minutes ago      Exited              kube-apiserver                         0                   46c947f55e62d       kube-apiserver-embed-certs-536489                       kube-system
	
	
	==> containerd <==
	Dec 19 03:19:38 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:38.827110124Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e13be395cb122823ccdc93143fde460.slice/cri-containerd-ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.843046260Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18df65878475717a15ee1c1c40abb4c3.slice/cri-containerd-5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.843195192Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18df65878475717a15ee1c1c40abb4c3.slice/cri-containerd-5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.844083124Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79b035a8_ae4b_4a1f_8458_4fe0f7d4ebef.slice/cri-containerd-1ce22fc8ed5c244e3041371fb5e196f2c8c0390b5ef45df170040970b4b6679d.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.844200925Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79b035a8_ae4b_4a1f_8458_4fe0f7d4ebef.slice/cri-containerd-1ce22fc8ed5c244e3041371fb5e196f2c8c0390b5ef45df170040970b4b6679d.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.844897618Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0dceb8_d48d_4215_82f5_df001a8ffe5f.slice/cri-containerd-d98f6ee8737a4e1a9384d4dd7481c98610b0e878fad1eb0d13f725c032eb8a18.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.844984498Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0dceb8_d48d_4215_82f5_df001a8ffe5f.slice/cri-containerd-d98f6ee8737a4e1a9384d4dd7481c98610b0e878fad1eb0d13f725c032eb8a18.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.845991377Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75eb7b001775228e6f7e4cc959d9647.slice/cri-containerd-adaab08cca65aed7fec4b0bd60b2a396ef69b8752557464785ac247047a4b62d.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.846149116Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75eb7b001775228e6f7e4cc959d9647.slice/cri-containerd-adaab08cca65aed7fec4b0bd60b2a396ef69b8752557464785ac247047a4b62d.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.846998171Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2a4d0c65_8aff_4b2f_bb3d_d79b89f560ca.slice/cri-containerd-88391b53389ad6316dd5f150ef995160d26338b062497d4ce692afa61fc149e0.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.847104580Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2a4d0c65_8aff_4b2f_bb3d_d79b89f560ca.slice/cri-containerd-88391b53389ad6316dd5f150ef995160d26338b062497d4ce692afa61fc149e0.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.847825371Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab8b054_e2df_4a75_92bf_a7df63248b7a.slice/cri-containerd-b5a0ba5562bdd756f5240f2406266343eb667e23534e1a7d78549343008ecfaf.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.847927440Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab8b054_e2df_4a75_92bf_a7df63248b7a.slice/cri-containerd-b5a0ba5562bdd756f5240f2406266343eb667e23534e1a7d78549343008ecfaf.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.848637642Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be7aed4_50f4_44e9_a4bb_985684a728ad.slice/cri-containerd-6d2c834ee3967fcf1a232292c437f189b9a9603e0cf2d152b90241728f99395b.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.848762293Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be7aed4_50f4_44e9_a4bb_985684a728ad.slice/cri-containerd-6d2c834ee3967fcf1a232292c437f189b9a9603e0cf2d152b90241728f99395b.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.849397551Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d84d601_10a6_4610_8b50_1794a35691db.slice/cri-containerd-a617f907f59764ef3a36004353cc0c58bffeab607edebad8cd81a81cf8b6ff18.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.849475667Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d84d601_10a6_4610_8b50_1794a35691db.slice/cri-containerd-a617f907f59764ef3a36004353cc0c58bffeab607edebad8cd81a81cf8b6ff18.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.850218204Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e13be395cb122823ccdc93143fde460.slice/cri-containerd-ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.850304763Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e13be395cb122823ccdc93143fde460.slice/cri-containerd-ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.850951111Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7f26c2_aed8_4540_bd1f_0ee0b1974137.slice/cri-containerd-23500931d75449b8160f4e7f201ba70585ca8e2905c5d317c448baea0681d8a3.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.851029217Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7f26c2_aed8_4540_bd1f_0ee0b1974137.slice/cri-containerd-23500931d75449b8160f4e7f201ba70585ca8e2905c5d317c448baea0681d8a3.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.851684377Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c90b41_88a3_4279_84d8_13a52b7ef246.slice/cri-containerd-bbf97a8974843a36509fb3ed1c0f5f2bf65466a551be783aacb684ee93acde81.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.851770157Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c90b41_88a3_4279_84d8_13a52b7ef246.slice/cri-containerd-bbf97a8974843a36509fb3ed1c0f5f2bf65466a551be783aacb684ee93acde81.scope/hugetlb.1GB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.852567647Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c0a13ba859c4767a18964a450bf36c.slice/cri-containerd-2a322cf835b3b23b782536fe76f4ad8a12410fd3403278fc6b474579b71645a9.scope/hugetlb.2MB.events\""
	Dec 19 03:19:48 embed-certs-536489 containerd[454]: time="2025-12-19T03:19:48.852727057Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c0a13ba859c4767a18964a450bf36c.slice/cri-containerd-2a322cf835b3b23b782536fe76f4ad8a12410fd3403278fc6b474579b71645a9.scope/hugetlb.1GB.events\""
	
	
	==> coredns [49d80e3460629230bed0177dda30379e478025d61a9337de8415aced6692f0c5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41424 - 61485 "HINFO IN 1045963530138923230.8580688753000702100. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052006531s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d98f6ee8737a4e1a9384d4dd7481c98610b0e878fad1eb0d13f725c032eb8a18] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39202 - 63367 "HINFO IN 6663590121657938747.4186564428347586509. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025882834s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-536489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-536489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-536489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_03_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-536489
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:19:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:19:36 +0000   Fri, 19 Dec 2025 03:03:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:19:36 +0000   Fri, 19 Dec 2025 03:03:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:19:36 +0000   Fri, 19 Dec 2025 03:03:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:19:36 +0000   Fri, 19 Dec 2025 03:04:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-536489
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                1ec4ce9e-9f47-460c-8f5f-4dbae0818e5d
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-qmb9z                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-embed-certs-536489                                  100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-kzlhv                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-embed-certs-536489                        250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-536489               200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-qhlhx                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-536489                        100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-746fcd58dc-8458x                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-api-f5b56d7b9-zkjk8                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-auth-557d9fbf7b-86ldt               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-xp7zj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-9nkzr    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-x8z8r                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-536489 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-536489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-536489 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-536489 event: Registered Node embed-certs-536489 in Controller
	  Normal  NodeReady                15m                kubelet          Node embed-certs-536489 status is now: NodeReady
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-536489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-536489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-536489 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-536489 event: Registered Node embed-certs-536489 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9] <==
	{"level":"warn","ts":"2025-12-19T03:04:48.898029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.910862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.915679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.925908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.934366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.993787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.874145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.898623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.925283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.939330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.966946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.018405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.030527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.047519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.060496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.089715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35188","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:05:07.803872Z","caller":"traceutil/trace.go:172","msg":"trace[1867073184] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"138.584763ms","start":"2025-12-19T03:05:07.665266Z","end":"2025-12-19T03:05:07.803850Z","steps":["trace[1867073184] 'process raft request'  (duration: 138.402929ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:07.995500Z","caller":"traceutil/trace.go:172","msg":"trace[27340533] transaction","detail":"{read_only:false; response_revision:752; number_of_response:1; }","duration":"185.133001ms","start":"2025-12-19T03:05:07.810344Z","end":"2025-12-19T03:05:07.995477Z","steps":["trace[27340533] 'process raft request'  (duration: 104.597246ms)","trace[27340533] 'compare'  (duration: 80.3333ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:07.995687Z","caller":"traceutil/trace.go:172","msg":"trace[843511351] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"185.056756ms","start":"2025-12-19T03:05:07.810614Z","end":"2025-12-19T03:05:07.995671Z","steps":["trace[843511351] 'process raft request'  (duration: 184.803425ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:14:48.371167Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1148}
	{"level":"info","ts":"2025-12-19T03:14:48.391919Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1148,"took":"20.269493ms","hash":742480838,"current-db-size-bytes":4325376,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-19T03:14:48.392051Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":742480838,"revision":1148,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:48.375346Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1408}
	{"level":"info","ts":"2025-12-19T03:19:48.378040Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1408,"took":"2.28429ms","hash":877099031,"current-db-size-bytes":4325376,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T03:19:48.378084Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":877099031,"revision":1408,"compact-revision":1148}
	
	
	==> etcd [950bbd91cdf6c6f6ba6481d495ec450f0338291700356cd321b1785e71f85ce9] <==
	{"level":"warn","ts":"2025-12-19T03:03:49.550065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.567028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.574515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.584857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.593473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.600846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.609207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.617661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.625168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.633090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.642740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.650275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.657708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.664694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.672975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.680435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.695654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.702627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.714083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.723066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.731127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.751813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.759831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.768680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.833814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58604","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:19:57 up  2:02,  0 user,  load average: 0.91, 0.72, 3.90
	Linux embed-certs-536489 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ef3449d1c94453317ecbd3daf843f6ee0146bff95f5ea5fb15561ddd656b76e] <==
	I1219 03:04:03.286512       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:03.286802       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1219 03:04:03.286961       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:03.286988       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:03.287013       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:03.584219       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:03.584267       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:03.584281       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:03.584476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:04.084669       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:04.084709       1 metrics.go:72] Registering metrics
	I1219 03:04:04.084861       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:13.585471       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:04:13.585547       1 main.go:301] handling current node
	I1219 03:04:23.584909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:04:23.584968       1 main.go:301] handling current node
	
	
	==> kindnet [88391b53389ad6316dd5f150ef995160d26338b062497d4ce692afa61fc149e0] <==
	I1219 03:17:52.264627       1 main.go:301] handling current node
	I1219 03:18:02.264462       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:18:02.264499       1 main.go:301] handling current node
	I1219 03:18:12.264124       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:18:12.264159       1 main.go:301] handling current node
	I1219 03:18:22.263922       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:18:22.263953       1 main.go:301] handling current node
	I1219 03:18:32.264188       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:18:32.264219       1 main.go:301] handling current node
	I1219 03:18:42.263895       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:18:42.263926       1 main.go:301] handling current node
	I1219 03:18:52.264430       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:18:52.264463       1 main.go:301] handling current node
	I1219 03:19:02.264322       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:19:02.264387       1 main.go:301] handling current node
	I1219 03:19:12.264331       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:19:12.264405       1 main.go:301] handling current node
	I1219 03:19:22.263575       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:19:22.263641       1 main.go:301] handling current node
	I1219 03:19:32.263539       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:19:32.263574       1 main.go:301] handling current node
	I1219 03:19:42.264296       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:19:42.264336       1 main.go:301] handling current node
	I1219 03:19:52.314753       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:19:52.314800       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a468570a2a402d6b3627a5f34baf561fb32f32353eae4b100a0e832cfda659f4] <==
	I1219 03:03:53.080830       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:03:53.119119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:03:57.551348       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:03:57.556886       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:03:58.149856       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:03:58.349463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1219 03:04:26.563944       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:36676: use of closed network connection
	I1219 03:04:27.245043       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:27.248999       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:27.249065       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:04:27.249119       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:27.321637       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.104.75.28"}
	W1219 03:04:27.328760       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:27.328816       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:04:27.334393       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:27.334442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [adaab08cca65aed7fec4b0bd60b2a396ef69b8752557464785ac247047a4b62d] <==
	 > logger="UnhandledError"
	I1219 03:15:50.588265       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:17:50.586132       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:17:50.586210       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:17:50.586236       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:17:50.589333       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:17:50.589431       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:17:50.589450       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:19:49.589963       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:19:49.590060       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:19:50.591114       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 03:19:50.591156       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:19:50.591200       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:19:50.591231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:19:50.591207       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:19:50.592418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [297af3c7b709c7627a4e3547ff621bc84bc949e990ed9529d2de353cd97067ba] <==
	I1219 03:03:57.379340       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:03:57.394447       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:03:57.394515       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:03:57.394572       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:03:57.394616       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:03:57.394624       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:03:57.394934       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:03:57.395929       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 03:03:57.395994       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:03:57.396026       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:03:57.396084       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:03:57.396169       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:03:57.396328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:03:57.396811       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:03:57.398567       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 03:03:57.399867       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 03:03:57.401002       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:03:57.402356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:03:57.405538       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:03:57.407743       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:03:57.422148       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:03:57.441008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:04:17.352290       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1219 03:04:27.407214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:04:27.448854       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d] <==
	I1219 03:13:54.341929       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:14:24.257489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:14:24.350240       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:14:54.263075       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:14:54.358551       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:15:24.267726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:15:24.366310       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:15:54.273756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:15:54.374785       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:16:24.278279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:16:24.382441       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:16:54.283941       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:16:54.390123       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:17:24.288837       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:17:24.398505       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:17:54.293666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:17:54.405774       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:18:24.298177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:18:24.413372       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:18:54.302708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:18:54.421484       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:19:24.307734       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:19:24.428874       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:19:54.312424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:19:54.435556       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [23500931d75449b8160f4e7f201ba70585ca8e2905c5d317c448baea0681d8a3] <==
	I1219 03:04:51.445944       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:51.552010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:04:51.653481       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:04:51.653875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:04:51.653988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:51.734710       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:51.734784       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:04:51.744382       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:51.745692       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:04:51.745799       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:51.748477       1 config.go:200] "Starting service config controller"
	I1219 03:04:51.748568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:51.748780       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:51.748857       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:51.749372       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:51.750062       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:51.750837       1 config.go:309] "Starting node config controller"
	I1219 03:04:51.750861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:51.750869       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:51.849778       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:04:51.850063       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:51.850193       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5a983f195e2d026daabd22368b51353c4fcb3399dbe02f13f628d6de0e5afbc] <==
	I1219 03:03:59.612758       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:03:59.697569       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:03:59.798640       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:03:59.798691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:03:59.798877       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:03:59.832040       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:03:59.832120       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:03:59.840790       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:03:59.841566       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:03:59.842147       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:03:59.845715       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:03:59.846440       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:03:59.846758       1 config.go:309] "Starting node config controller"
	I1219 03:03:59.849072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:03:59.849083       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:03:59.847258       1 config.go:200] "Starting service config controller"
	I1219 03:03:59.849092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:03:59.847253       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:03:59.849106       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:03:59.949210       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:03:59.949259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:03:59.949270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2a322cf835b3b23b782536fe76f4ad8a12410fd3403278fc6b474579b71645a9] <==
	I1219 03:04:47.795634       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:04:49.493848       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:04:49.495162       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:49.495189       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:04:49.495211       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:04:49.578289       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:04:49.578331       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:49.600751       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:49.601169       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:49.602617       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:04:49.602945       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:04:49.702362       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [694f1a505c59b9aa3ec96e13e5fad88123c4ea35ca018ff6aa1259950243c9a4] <==
	E1219 03:03:50.412171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:03:50.412533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:03:50.412546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:03:50.412314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:03:50.412426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:03:50.412415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:03:50.412500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:03:50.412246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:03:50.412361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:03:50.412696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:03:50.412781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:03:50.412745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:03:51.239323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:03:51.282101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:03:51.285301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:03:51.295328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:03:51.447520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:03:51.508832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:03:51.605064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:03:51.614764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:03:51.659622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:03:51.672097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:03:51.691733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:03:51.698794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1219 03:03:53.709139       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:18:04 embed-certs-536489 kubelet[592]: E1219 03:18:04.368611     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:18:11 embed-certs-536489 kubelet[592]: E1219 03:18:11.367967     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:18:14 embed-certs-536489 kubelet[592]: E1219 03:18:14.367931     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:18:18 embed-certs-536489 kubelet[592]: E1219 03:18:18.368565     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:18:24 embed-certs-536489 kubelet[592]: E1219 03:18:24.368433     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:18:29 embed-certs-536489 kubelet[592]: E1219 03:18:29.368461     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:18:32 embed-certs-536489 kubelet[592]: E1219 03:18:32.368138     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:18:37 embed-certs-536489 kubelet[592]: E1219 03:18:37.368179     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:18:42 embed-certs-536489 kubelet[592]: E1219 03:18:42.367825     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:18:45 embed-certs-536489 kubelet[592]: E1219 03:18:45.367651     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:18:52 embed-certs-536489 kubelet[592]: E1219 03:18:52.368846     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:18:53 embed-certs-536489 kubelet[592]: E1219 03:18:53.368048     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:18:56 embed-certs-536489 kubelet[592]: E1219 03:18:56.368764     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:19:07 embed-certs-536489 kubelet[592]: E1219 03:19:07.368138     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:19:07 embed-certs-536489 kubelet[592]: E1219 03:19:07.368159     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:19:07 embed-certs-536489 kubelet[592]: E1219 03:19:07.368165     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:19:21 embed-certs-536489 kubelet[592]: E1219 03:19:21.368039     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:19:21 embed-certs-536489 kubelet[592]: E1219 03:19:21.368045     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:19:22 embed-certs-536489 kubelet[592]: E1219 03:19:22.367746     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:19:33 embed-certs-536489 kubelet[592]: E1219 03:19:33.367800     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:19:33 embed-certs-536489 kubelet[592]: E1219 03:19:33.367941     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:19:34 embed-certs-536489 kubelet[592]: E1219 03:19:34.368420     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:19:45 embed-certs-536489 kubelet[592]: E1219 03:19:45.367936     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:19:46 embed-certs-536489 kubelet[592]: E1219 03:19:46.367873     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:19:47 embed-certs-536489 kubelet[592]: E1219 03:19:47.367809     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	
	
	==> kubernetes-dashboard [6d2c834ee3967fcf1a232292c437f189b9a9603e0cf2d152b90241728f99395b] <==
	I1219 03:04:59.714371       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:04:59.714452       1 init.go:49] Using in-cluster config
	I1219 03:04:59.714625       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [b5a0ba5562bdd756f5240f2406266343eb667e23534e1a7d78549343008ecfaf] <==
	E1219 03:16:56.931524       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:17:56.933300       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:18:56.933654       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:19:56.933783       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:16:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:16:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:16:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:17:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:17:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:17:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:17:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:17:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:17:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:18:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:18:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:18:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:18:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:18:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:18:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:19:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:19:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:19:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:19:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:19:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:19:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	
	
	==> storage-provisioner [bbf97a8974843a36509fb3ed1c0f5f2bf65466a551be783aacb684ee93acde81] <==
	W1219 03:19:33.247860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:35.251191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:35.255153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:37.258430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:37.262227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:39.265137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:39.269947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:41.273528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:41.277491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:43.280917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:43.285252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:45.288087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:45.292821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:47.296048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:47.299702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:49.302775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:49.306653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:51.309866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:51.313709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:53.316940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:53.321595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:55.325278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:55.329517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:57.333047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:57.338098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c081d39cdf580a1c019cbb24b0e727aa96d7d5720457e45142ee1b70e3fa2ea9] <==
	I1219 03:04:51.367962       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:21.372851       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-536489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-536489 describe pod metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-536489 describe pod metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r: exit status 1 (59.974957ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-8458x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-f5b56d7b9-zkjk8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-x8z8r" not found

** /stderr **
helpers_test.go:288: kubectl --context embed-certs-536489 describe pod metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.97s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:20:00.351567938 +0000 UTC m=+3277.344692148
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-208281
helpers_test.go:244: (dbg) docker inspect no-preload-208281:

-- stdout --
	[
	    {
	        "Id": "d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48",
	        "Created": "2025-12-19T03:03:28.458126614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 570153,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:42.842760214Z",
	            "FinishedAt": "2025-12-19T03:04:41.846888118Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/hostname",
	        "HostsPath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/hosts",
	        "LogPath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48-json.log",
	        "Name": "/no-preload-208281",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-208281:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-208281",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48",
	                "LowerDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-208281",
	                "Source": "/var/lib/docker/volumes/no-preload-208281/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-208281",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-208281",
	                "name.minikube.sigs.k8s.io": "no-preload-208281",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ffc7d31ca1ab2beeb959bdbebc852eb31e400242e6bcd8a5fef873fcf70249b3",
	            "SandboxKey": "/var/run/docker/netns/ffc7d31ca1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-208281": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37b3bc9d9fb4e4e4504c1f37f0b72e1a5a4d569ae13e2c5ab75bc3fa3aa89d9c",
	                    "EndpointID": "4745c5f264d9d95359957a2376ee9bf289d5f6ec578a858c6c46d3f5f33ee484",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:ee:da:be:6a:18",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-208281",
	                        "d56842232a2c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208281 -n no-preload-208281
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-208281 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-208281 logs -n 25: (1.519395264s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-967008                                                                                                                                                                                                                              │ cert-options-967008          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p kubernetes-upgrade-340572                                                                                                                                                                                                                        │ kubernetes-upgrade-340572    │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-821572 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-821572                                                                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ delete  │ -p disable-driver-mounts-443690                                                                                                                                                                                                                     │ disable-driver-mounts-443690 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-002036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p old-k8s-version-002036 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p embed-certs-536489 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:50.472071  573699 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:50.472443  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.472454  573699 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:50.472463  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.473301  573699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:04:50.474126  573699 out.go:368] Setting JSON to false
	I1219 03:04:50.476304  573699 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6429,"bootTime":1766107061,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:50.476440  573699 start.go:143] virtualization: kvm guest
	I1219 03:04:50.478144  573699 out.go:179] * [default-k8s-diff-port-103644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:50.479945  573699 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:50.480003  573699 notify.go:221] Checking for updates...
	I1219 03:04:50.482332  573699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:50.483901  573699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.485635  573699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:04:50.489602  573699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:50.493460  573699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:48.691145  569947 cli_runner.go:164] Run: docker network inspect no-preload-208281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:48.711282  569947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:48.716221  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.729144  569947 kubeadm.go:884] updating cluster {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:48.729324  569947 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:04:48.729375  569947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:48.763109  569947 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:48.763136  569947 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:48.763146  569947 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:04:48.763264  569947 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:48.763347  569947 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:48.796269  569947 cni.go:84] Creating CNI manager for ""
	I1219 03:04:48.796300  569947 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:48.796329  569947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:48.796369  569947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208281 NodeName:no-preload-208281 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:48.796558  569947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:48.796669  569947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:04:48.808026  569947 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:48.808102  569947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:48.819240  569947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1219 03:04:48.836384  569947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:04:48.852550  569947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1219 03:04:48.869275  569947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:48.873704  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.886490  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:48.994443  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:49.020494  569947 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281 for IP: 192.168.85.2
	I1219 03:04:49.020518  569947 certs.go:195] generating shared ca certs ...
	I1219 03:04:49.020533  569947 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:49.020722  569947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:49.020809  569947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:49.020826  569947 certs.go:257] generating profile certs ...
	I1219 03:04:49.020975  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.key
	I1219 03:04:49.021064  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key.8f504093
	I1219 03:04:49.021159  569947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key
	I1219 03:04:49.021324  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:49.021373  569947 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:49.021389  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:49.021430  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:49.021457  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:49.021480  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:49.021525  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:49.022292  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:49.050958  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:49.072475  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:49.095867  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:49.124289  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:04:49.150664  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:04:49.188239  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:49.216791  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:49.242767  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:49.264732  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:49.286635  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:49.313716  569947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:49.329405  569947 ssh_runner.go:195] Run: openssl version
	I1219 03:04:49.337082  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.347002  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:49.355979  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.360975  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.361048  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.457547  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:49.470846  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.484764  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:49.501564  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510435  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510523  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.583657  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:49.596341  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.615267  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:49.637741  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651506  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651606  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.719393  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
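The three cert blocks above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, compute its OpenSSL subject hash, and verify a `<hash>.0` symlink exists. A minimal sketch of that sequence, runnable without root against a throwaway self-signed cert (the file names and CN are stand-ins, not minikube's):

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Throwaway CA cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out minikubeCA.pem \
  -subj "/CN=demoCA" -days 1 >/dev/null 2>&1

# Subject hash: the 8-hex-digit name OpenSSL looks up in /etc/ssl/certs
# (b5213941.0 in the log above is such a name).
hash=$(openssl x509 -hash -noout -in minikubeCA.pem)

# In the log this is `sudo ln -fs ... /etc/ssl/certs/...`; here we link locally.
ln -fs "$PWD/minikubeCA.pem" "$hash.0"

# Mirrors the final `sudo test -L /etc/ssl/certs/<hash>.0` verification step.
test -L "$hash.0" && echo "trusted as $hash.0"
```

The `<hash>.0` naming is what lets OpenSSL find the CA by subject at verification time without scanning every file in the directory.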
	I1219 03:04:49.738446  569947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:49.759885  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:49.839963  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:49.916940  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:49.984478  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:50.052790  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:50.213057  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
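The run of `openssl x509 ... -checkend 86400` calls above is minikube asking whether each control-plane cert is still valid 24 hours from now; `-checkend N` exits 0 only if the cert will not expire within N seconds. A small demonstration against a throwaway cert (names are stand-ins):

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Throwaway cert valid for 2 days, standing in for e.g. apiserver.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=demo" -days 2 >/dev/null 2>&1

# 86400s = 24h window, as in the log; exit status 0 means "still valid then".
if openssl x509 -noout -in demo.crt -checkend 86400 >/dev/null; then ok24=yes; else ok24=no; fi

# A 72h window exceeds the cert's remaining lifetime, so this check fails.
if openssl x509 -noout -in demo.crt -checkend 259200 >/dev/null; then ok72=yes; else ok72=no; fi

echo "valid for 24h: $ok24, valid for 72h: $ok72"
```

Driving the decision off the exit status (rather than parsing output) is what makes this usable from `ssh_runner` style command execution.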
	I1219 03:04:50.323267  569947 kubeadm.go:401] StartCluster: {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.323602  569947 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:50.323919  569947 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:50.475134  569947 cri.go:92] found id: "cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa"
	I1219 03:04:50.475159  569947 cri.go:92] found id: "fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569"
	I1219 03:04:50.475166  569947 cri.go:92] found id: "e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a"
	I1219 03:04:50.475171  569947 cri.go:92] found id: "496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c"
	I1219 03:04:50.475175  569947 cri.go:92] found id: "0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7"
	I1219 03:04:50.475180  569947 cri.go:92] found id: "1b139b90f72cc73cf0a391fb1b6dde88df245b3d92b6a686104996e14c38330c"
	I1219 03:04:50.475184  569947 cri.go:92] found id: "6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5"
	I1219 03:04:50.475506  569947 cri.go:92] found id: "6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e"
	I1219 03:04:50.475532  569947 cri.go:92] found id: "0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9"
	I1219 03:04:50.475549  569947 cri.go:92] found id: "7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459"
	I1219 03:04:50.475553  569947 cri.go:92] found id: "06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b"
	I1219 03:04:50.475557  569947 cri.go:92] found id: "ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400"
	I1219 03:04:50.475562  569947 cri.go:92] found id: ""
	I1219 03:04:50.475632  569947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:50.558499  569947 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","pid":805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e/rootfs","created":"2025-12-19T03:04:49.720787385Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-208281_355754afcd0ce2d7bab6c853c60e836c","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2/rootfs","created":"2025-12-19T03:04:49.778097457Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.c
ri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-208281_e43ae2e7891eaa1ff806e636f311fb81","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","pid":838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07/rootfs","created":"2025-12-19T03:04:49.777265025Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kub
ernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-208281_80442131b1359e6657f2959b40f80467","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"},{"ociVersion":"1.2.1","id":"496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","pid":902,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c/rootfs","created":"2025-12-19T03:04:49.944110218Z","annotations":{"io.kubernetes.cri.container-name":"kube-apis
erver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3/rootfs","created":"2025-12-19T03:04:49.76636358Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-c
pu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208281_93a9992ff7a9c41e489b493737b5b488","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa/rootfs","created":"2025-12-19T03:04:50.065275653Z","annotations":{"io.kubernetes
.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a/rootfs","created":"2025-12-19T03:04:50.024946214Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-
name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569/rootfs","created":"2025-12-19T03:04:50.153274168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf5
1455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"}]
	I1219 03:04:50.559253  569947 cri.go:129] list returned 8 containers
	I1219 03:04:50.559288  569947 cri.go:132] container: {ID:2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e Status:running}
	I1219 03:04:50.559310  569947 cri.go:134] skipping 2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e - not in ps
	I1219 03:04:50.559318  569947 cri.go:132] container: {ID:38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 Status:running}
	I1219 03:04:50.559326  569947 cri.go:134] skipping 38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 - not in ps
	I1219 03:04:50.559332  569947 cri.go:132] container: {ID:46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 Status:running}
	I1219 03:04:50.559338  569947 cri.go:134] skipping 46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 - not in ps
	I1219 03:04:50.559343  569947 cri.go:132] container: {ID:496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c Status:running}
	I1219 03:04:50.559363  569947 cri.go:138] skipping {496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c running}: state = "running", want "paused"
	I1219 03:04:50.559373  569947 cri.go:132] container: {ID:7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 Status:running}
	I1219 03:04:50.559381  569947 cri.go:134] skipping 7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 - not in ps
	I1219 03:04:50.559386  569947 cri.go:132] container: {ID:cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa Status:running}
	I1219 03:04:50.559393  569947 cri.go:138] skipping {cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa running}: state = "running", want "paused"
	I1219 03:04:50.559400  569947 cri.go:132] container: {ID:e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a Status:running}
	I1219 03:04:50.559406  569947 cri.go:138] skipping {e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a running}: state = "running", want "paused"
	I1219 03:04:50.559412  569947 cri.go:132] container: {ID:fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 Status:running}
	I1219 03:04:50.559419  569947 cri.go:138] skipping {fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 running}: state = "running", want "paused"
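The cri.go lines above cross-reference two listings: IDs from `crictl ps` and entries from `runc list -f json`. Entries runc knows about but crictl did not report (the pause sandboxes) are skipped as "not in ps", and reported containers are further skipped unless their state matches the requested one ("paused" here, so every running container is skipped). A dependency-free sketch of that filter with made-up IDs and inline stand-in data:

```shell
set -eu
# Stand-ins: (id, status) pairs playing the role of `runc list -f json`
# output, and the id set that `crictl ps` reported. IDs are fabricated.
entries="aaa running
bbb paused
ccc running"
crictl_ids="bbb ccc"
want=paused
selected=""

while read -r id status; do
  # Skip anything crictl did not report ("not in ps" in the log).
  case " $crictl_ids " in
    *" $id "*) ;;
    *) echo "skipping $id - not in ps"; continue ;;
  esac
  # Skip containers whose state does not match the requested one.
  if [ "$status" != "$want" ]; then
    echo "skipping {$id $status}: state = \"$status\", want \"$want\""
    continue
  fi
  selected="$selected $id"
done <<EOF
$entries
EOF

echo "selected:$selected"
```

With `want=paused` and nothing actually paused in the cluster, the real run above ends with an empty selection, which is why no containers are acted on before the restart path continues.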
	I1219 03:04:50.559472  569947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:50.576564  569947 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:50.576683  569947 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:50.576777  569947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:50.600225  569947 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:50.601759  569947 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208281" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.605721  569947 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208281" cluster setting kubeconfig missing "no-preload-208281" context setting]
	I1219 03:04:50.610686  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.613392  569947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:50.647032  569947 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1219 03:04:50.647196  569947 kubeadm.go:602] duration metric: took 70.481994ms to restartPrimaryControlPlane
	I1219 03:04:50.647478  569947 kubeadm.go:403] duration metric: took 324.224528ms to StartCluster
	I1219 03:04:50.647573  569947 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.647991  569947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.652215  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.652837  569947 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:50.652966  569947 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208281"
	I1219 03:04:50.652984  569947 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208281"
	W1219 03:04:50.652993  569947 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:50.653027  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.653048  569947 config.go:182] Loaded profile config "no-preload-208281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:50.653120  569947 addons.go:70] Setting default-storageclass=true in profile "no-preload-208281"
	I1219 03:04:50.653135  569947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208281"
	I1219 03:04:50.653460  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.653534  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.655588  569947 addons.go:70] Setting metrics-server=true in profile "no-preload-208281"
	I1219 03:04:50.655611  569947 addons.go:239] Setting addon metrics-server=true in "no-preload-208281"
	W1219 03:04:50.655621  569947 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:50.655656  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.656118  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.656525  569947 addons.go:70] Setting dashboard=true in profile "no-preload-208281"
	I1219 03:04:50.656563  569947 addons.go:239] Setting addon dashboard=true in "no-preload-208281"
	W1219 03:04:50.656574  569947 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:50.656622  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.657316  569947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:50.657617  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.660722  569947 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:50.661854  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:50.707508  569947 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:50.708775  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:50.708812  569947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:50.708834  569947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:50.495202  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:50.495941  573699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:50.539840  573699 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:50.540119  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.710990  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.671412726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.711217  573699 docker.go:319] overlay module found
	I1219 03:04:50.713697  573699 out.go:179] * Using the docker driver based on existing profile
	I1219 03:04:50.714949  573699 start.go:309] selected driver: docker
	I1219 03:04:50.714970  573699 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.715089  573699 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:50.716020  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.884011  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.859280212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.884478  573699 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:50.884531  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:50.884789  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:50.884940  573699 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.887403  573699 out.go:179] * Starting "default-k8s-diff-port-103644" primary control-plane node in "default-k8s-diff-port-103644" cluster
	I1219 03:04:50.888689  573699 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:04:50.889896  573699 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:50.891030  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:50.891092  573699 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:04:50.891106  573699 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:50.891194  573699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:50.891211  573699 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:50.891221  573699 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:04:50.891356  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:50.932991  573699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:50.933024  573699 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:50.933040  573699 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:50.933079  573699 start.go:360] acquireMachinesLock for default-k8s-diff-port-103644: {Name:mk39933c40de3c92aeeb6b9d20d3c90e6af0f1fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:50.933158  573699 start.go:364] duration metric: took 48.804µs to acquireMachinesLock for "default-k8s-diff-port-103644"
	I1219 03:04:50.933177  573699 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:04:50.933183  573699 fix.go:54] fixHost starting: 
	I1219 03:04:50.933489  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:50.973427  573699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103644: state=Stopped err=<nil>
	W1219 03:04:50.973619  573699 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:50.748260  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.195228143s)
	I1219 03:04:50.748361  566718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:51.828106  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml: (1.079706419s)
	I1219 03:04:51.828277  566718 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-002036"
	I1219 03:04:51.828773  566718 cli_runner.go:164] Run: docker container inspect old-k8s-version-002036 --format={{.State.Status}}
	I1219 03:04:51.856291  566718 out.go:179] * Verifying dashboard addon...
	I1219 03:04:50.708886  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.709108  569947 addons.go:239] Setting addon default-storageclass=true in "no-preload-208281"
	W1219 03:04:50.709132  569947 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:50.709161  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.709725  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.710101  569947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.710123  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:50.710173  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.716696  569947 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:50.716718  569947 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:50.716777  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.770714  569947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:50.770743  569947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:50.770811  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.772323  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.774548  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.782771  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.818125  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.922492  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:50.961986  569947 node_ready.go:35] waiting up to 6m0s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:50.964889  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.991305  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:50.991337  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:50.997863  569947 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:51.029470  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:51.029507  569947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:51.077218  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:51.083520  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:51.083552  569947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:51.107276  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:52.474618  569947 node_ready.go:49] node "no-preload-208281" is "Ready"
	I1219 03:04:52.474662  569947 node_ready.go:38] duration metric: took 1.512481187s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:52.474682  569947 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:04:52.474743  569947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:51.142743  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.559306992s)
	I1219 03:04:51.142940  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.499593696s)
	I1219 03:04:51.143060  568301 addons.go:500] Verifying addon metrics-server=true in "embed-certs-536489"
	I1219 03:04:51.143722  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:51.144038  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.580066034s)
	I1219 03:04:52.990446  568301 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.445475643s)
	I1219 03:04:52.990490  568301 api_server.go:72] duration metric: took 5.685402741s to wait for apiserver process to appear ...
	I1219 03:04:52.990498  568301 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:52.990528  568301 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:04:52.992275  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.373532841s)
	I1219 03:04:52.992364  568301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:53.002104  568301 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:04:53.006331  568301 api_server.go:141] control plane version: v1.34.3
	I1219 03:04:53.006385  568301 api_server.go:131] duration metric: took 15.878835ms to wait for apiserver health ...
	I1219 03:04:53.006399  568301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:53.016977  568301 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:53.017141  568301 system_pods.go:61] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.017157  568301 system_pods.go:61] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.017165  568301 system_pods.go:61] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.017184  568301 system_pods.go:61] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.017193  568301 system_pods.go:61] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.017199  568301 system_pods.go:61] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.017212  568301 system_pods.go:61] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.017219  568301 system_pods.go:61] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.017225  568301 system_pods.go:61] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.017233  568301 system_pods.go:74] duration metric: took 10.826754ms to wait for pod list to return data ...
	I1219 03:04:53.017244  568301 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:53.020879  568301 default_sa.go:45] found service account: "default"
	I1219 03:04:53.020911  568301 default_sa.go:55] duration metric: took 3.659738ms for default service account to be created ...
	I1219 03:04:53.020925  568301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:53.118092  568301 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:53.118237  568301 system_pods.go:89] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.118277  568301 system_pods.go:89] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.118286  568301 system_pods.go:89] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.118334  568301 system_pods.go:89] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.118346  568301 system_pods.go:89] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.118360  568301 system_pods.go:89] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.118368  568301 system_pods.go:89] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.118508  568301 system_pods.go:89] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.118523  568301 system_pods.go:89] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.118535  568301 system_pods.go:126] duration metric: took 97.602528ms to wait for k8s-apps to be running ...
	I1219 03:04:53.118546  568301 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:53.118629  568301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:53.213539  568301 addons.go:500] Verifying addon dashboard=true in "embed-certs-536489"
	I1219 03:04:53.213985  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:53.214117  568301 system_svc.go:56] duration metric: took 95.561896ms WaitForService to wait for kubelet
	I1219 03:04:53.214162  568301 kubeadm.go:587] duration metric: took 5.909072172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:53.214187  568301 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:53.220086  568301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:53.220122  568301 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:53.220143  568301 node_conditions.go:105] duration metric: took 5.94983ms to run NodePressure ...
	I1219 03:04:53.220159  568301 start.go:242] waiting for startup goroutines ...
	I1219 03:04:53.239792  568301 out.go:179] * Verifying dashboard addon...
	I1219 03:04:51.859124  566718 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:51.862362  566718 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.241980  568301 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:53.245176  568301 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747449  568301 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747476  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.245867  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.747323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:50.976005  573699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103644" ...
	I1219 03:04:50.976124  573699 cli_runner.go:164] Run: docker start default-k8s-diff-port-103644
	I1219 03:04:51.482862  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:51.514418  573699 kic.go:430] container "default-k8s-diff-port-103644" state is running.
	I1219 03:04:51.515091  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:51.545304  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:51.545913  573699 machine.go:94] provisionDockerMachine start ...
	I1219 03:04:51.546012  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:51.578064  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:51.578471  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:51.578526  573699 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:04:51.580615  573699 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46348->127.0.0.1:33098: read: connection reset by peer
	I1219 03:04:54.740022  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.740053  573699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103644"
	I1219 03:04:54.740121  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.764557  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.764812  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.764832  573699 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103644 && echo "default-k8s-diff-port-103644" | sudo tee /etc/hostname
	I1219 03:04:54.940991  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.941090  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.961163  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.961447  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.961472  573699 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103644/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:04:55.112211  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:04:55.112238  573699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:04:55.112272  573699 ubuntu.go:190] setting up certificates
	I1219 03:04:55.112285  573699 provision.go:84] configureAuth start
	I1219 03:04:55.112354  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.131633  573699 provision.go:143] copyHostCerts
	I1219 03:04:55.131701  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:04:55.131722  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:04:55.131814  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:04:55.131992  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:04:55.132009  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:04:55.132066  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:04:55.132178  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:04:55.132189  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:04:55.132230  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:04:55.132339  573699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103644 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-103644 localhost minikube]
	I1219 03:04:55.201421  573699 provision.go:177] copyRemoteCerts
	I1219 03:04:55.201486  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:04:55.201545  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.220254  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.324809  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:04:55.344299  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:04:55.364633  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:04:55.383945  573699 provision.go:87] duration metric: took 271.644189ms to configureAuth
	I1219 03:04:55.383975  573699 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:04:55.384174  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:55.384190  573699 machine.go:97] duration metric: took 3.838258422s to provisionDockerMachine
	I1219 03:04:55.384201  573699 start.go:293] postStartSetup for "default-k8s-diff-port-103644" (driver="docker")
	I1219 03:04:55.384218  573699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:04:55.384292  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:04:55.384363  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.402689  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.509385  573699 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:04:55.513698  573699 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:04:55.513738  573699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:04:55.513752  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:04:55.513809  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:04:55.513923  573699 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:04:55.514061  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:04:55.522610  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:55.542136  573699 start.go:296] duration metric: took 157.911131ms for postStartSetup
	I1219 03:04:55.542235  573699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:04:55.542278  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.560317  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.676892  573699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:04:55.683207  573699 fix.go:56] duration metric: took 4.75001221s for fixHost
	I1219 03:04:55.683240  573699 start.go:83] releasing machines lock for "default-k8s-diff-port-103644", held for 4.750073001s
	I1219 03:04:55.683337  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.706632  573699 ssh_runner.go:195] Run: cat /version.json
	I1219 03:04:55.706696  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.706708  573699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:04:55.706796  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.729248  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.729555  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.832375  573699 ssh_runner.go:195] Run: systemctl --version
	I1219 03:04:55.888761  573699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:04:55.894089  573699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:04:55.894170  573699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:04:55.902973  573699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:04:55.903001  573699 start.go:496] detecting cgroup driver to use...
	I1219 03:04:55.903039  573699 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:04:55.903123  573699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:04:55.924413  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:04:55.939247  573699 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:04:55.939312  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:04:55.955848  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:04:55.970636  573699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:04:56.060548  573699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:04:56.151469  573699 docker.go:234] disabling docker service ...
	I1219 03:04:56.151544  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:04:56.168733  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:04:56.183785  573699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:04:56.269923  573699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:04:56.358410  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:04:56.374184  573699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:04:56.391509  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:04:56.403885  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:04:56.418704  573699 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:04:56.418843  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:04:56.432502  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.446280  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:04:56.458732  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.471691  573699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:04:56.482737  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:04:56.494667  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:04:56.507284  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:04:56.520174  573699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:04:56.530768  573699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:04:56.541170  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:56.646657  573699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:04:56.781992  573699 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:04:56.782112  573699 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:04:56.788198  573699 start.go:564] Will wait 60s for crictl version
	I1219 03:04:56.788285  573699 ssh_runner.go:195] Run: which crictl
	I1219 03:04:56.793113  573699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:04:56.836402  573699 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:04:56.836474  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.864133  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.898122  573699 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:04:53.197683  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23269288s)
	I1219 03:04:53.197756  569947 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.199861038s)
	I1219 03:04:53.197848  569947 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:53.197862  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.120620602s)
	I1219 03:04:53.198058  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.09074876s)
	I1219 03:04:53.198096  569947 addons.go:500] Verifying addon metrics-server=true in "no-preload-208281"
	I1219 03:04:53.198179  569947 api_server.go:72] duration metric: took 2.540661776s to wait for apiserver process to appear ...
	I1219 03:04:53.198202  569947 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:53.198229  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.198445  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:53.205510  569947 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:04:53.205637  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.205671  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:53.698608  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.705658  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.705697  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:54.198361  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:54.202897  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:04:54.204079  569947 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:04:54.204114  569947 api_server.go:131] duration metric: took 1.005903946s to wait for apiserver health ...
	I1219 03:04:54.204127  569947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:54.208336  569947 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:54.208377  569947 system_pods.go:61] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.208389  569947 system_pods.go:61] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.208403  569947 system_pods.go:61] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.208424  569947 system_pods.go:61] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.208437  569947 system_pods.go:61] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.208444  569947 system_pods.go:61] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.208460  569947 system_pods.go:61] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.208472  569947 system_pods.go:61] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.208477  569947 system_pods.go:61] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.208488  569947 system_pods.go:74] duration metric: took 4.352835ms to wait for pod list to return data ...
	I1219 03:04:54.208503  569947 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:54.211346  569947 default_sa.go:45] found service account: "default"
	I1219 03:04:54.211373  569947 default_sa.go:55] duration metric: took 2.86243ms for default service account to be created ...
	I1219 03:04:54.211385  569947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:54.214301  569947 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:54.214337  569947 system_pods.go:89] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.214347  569947 system_pods.go:89] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.214360  569947 system_pods.go:89] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.214369  569947 system_pods.go:89] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.214377  569947 system_pods.go:89] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.214386  569947 system_pods.go:89] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.214402  569947 system_pods.go:89] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.214411  569947 system_pods.go:89] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.214421  569947 system_pods.go:89] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.214431  569947 system_pods.go:126] duration metric: took 3.039478ms to wait for k8s-apps to be running ...
	I1219 03:04:54.214443  569947 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:54.214504  569947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:54.371132  569947 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.165499888s)
	I1219 03:04:54.371186  569947 system_svc.go:56] duration metric: took 156.734958ms WaitForService to wait for kubelet
	I1219 03:04:54.371215  569947 kubeadm.go:587] duration metric: took 3.713723941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:54.371244  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:04:54.371246  569947 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:54.374625  569947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:54.374660  569947 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:54.374679  569947 node_conditions.go:105] duration metric: took 3.423654ms to run NodePressure ...
	I1219 03:04:54.374695  569947 start.go:242] waiting for startup goroutines ...
	I1219 03:04:57.635651  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.264367144s)
	I1219 03:04:57.635887  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:57.949184  569947 addons.go:500] Verifying addon dashboard=true in "no-preload-208281"
	I1219 03:04:57.949557  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:57.976511  569947 out.go:179] * Verifying dashboard addon...
	I1219 03:04:56.899304  573699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:56.919626  573699 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:56.924517  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:56.937946  573699 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:56.938108  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:56.938182  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.968240  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.968267  573699 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:04:56.968327  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.997359  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.997383  573699 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:56.997392  573699 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 containerd true true} ...
	I1219 03:04:56.997515  573699 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-103644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:56.997591  573699 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:57.033726  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:57.033760  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:57.033788  573699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:57.033818  573699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103644 NodeName:default-k8s-diff-port-103644 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:57.034013  573699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-103644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
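The config above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube scps to `/var/tmp/minikube/kubeadm.yaml.new`. A quick way to sanity-check which documents such a file carries — sketched here against a stand-in file, not the node itself:

```shell
# Stand-in for /var/tmp/minikube/kubeadm.yaml.new; the real contents are in the log above.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# List each document's kind; the log's rendered config carries these same four.
grep '^kind:' /tmp/kubeadm-demo.yaml
```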
	I1219 03:04:57.034110  573699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:04:57.054291  573699 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:57.054366  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:57.069183  573699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1219 03:04:57.092986  573699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:04:57.114537  573699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1219 03:04:57.135768  573699 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:57.141830  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
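The one-liner above is minikube's idempotent hosts-entry update: drop any existing `control-plane.minikube.internal` line, append the fresh IP mapping, and copy the result back over `/etc/hosts`. The same pattern against a scratch file (paths and the stale IP are stand-ins), safe to re-run any number of times:

```shell
HOSTS=/tmp/demo-hosts
# Seed the file with a stale control-plane entry alongside a normal line.
printf '127.0.0.1\tlocalhost\n192.168.94.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Strip any old entry, then append the current one — the pattern from the log.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  echo $'192.168.94.2\tcontrol-plane.minikube.internal'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
grep control-plane "$HOSTS"
```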
	I1219 03:04:57.157200  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:57.285296  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:57.321401  573699 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644 for IP: 192.168.94.2
	I1219 03:04:57.321425  573699 certs.go:195] generating shared ca certs ...
	I1219 03:04:57.321445  573699 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:57.321651  573699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:57.321728  573699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:57.321741  573699 certs.go:257] generating profile certs ...
	I1219 03:04:57.321895  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.key
	I1219 03:04:57.321969  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key.eac4724a
	I1219 03:04:57.322032  573699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key
	I1219 03:04:57.322452  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:57.322563  573699 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:57.322947  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:57.323038  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:57.323130  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:57.323212  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:57.323310  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:57.324261  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:57.367430  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:57.395772  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:57.447975  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:57.485724  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:04:57.550160  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:04:57.586359  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:57.650368  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:57.705528  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:57.753827  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:57.796129  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:57.846633  573699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:57.874041  573699 ssh_runner.go:195] Run: openssl version
	I1219 03:04:57.883186  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.893276  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:57.903322  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908713  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908788  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.959424  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:57.975955  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:57.987406  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:57.999924  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007017  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007094  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.066450  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:58.084889  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.104839  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:58.121039  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128831  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128908  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.238719  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
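The `ls` / `openssl x509 -hash` / `test -L` triples above are how minikube installs CA material: each PEM under `/usr/share/ca-certificates` gets a symlink `/etc/ssl/certs/<subject-hash>.0` so OpenSSL can locate it by hash at verification time. A self-contained sketch with a throwaway cert and directory (all names illustrative):

```shell
DIR=/tmp/demo-certs; mkdir -p "$DIR"
# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demoCA' \
  -keyout "$DIR/demo.key" -out "$DIR/demo.pem" 2>/dev/null
# Subject-name hash, as in the log's `openssl x509 -hash -noout -in ...`.
HASH=$(openssl x509 -hash -noout -in "$DIR/demo.pem")
# Same layout the log then verifies with `sudo test -L /etc/ssl/certs/<hash>.0`.
ln -fs "$DIR/demo.pem" "$DIR/$HASH.0"
test -L "$DIR/$HASH.0" && echo "hash link present: $HASH.0"
```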
	I1219 03:04:58.257473  573699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:58.269077  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:58.373050  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:58.472122  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:58.523474  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:58.567812  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:58.624150  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
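`-checkend 86400` asks whether the certificate expires within the next 86400 seconds (24h); exit status 0 means it stays valid past that window, so minikube can skip regenerating it. The semantics, demonstrated with a throwaway cert (paths are stand-ins):

```shell
# Self-signed cert valid for 2 days.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj '/CN=checkend-demo' \
  -keyout /tmp/demo-checkend.key -out /tmp/demo-checkend.crt 2>/dev/null
# 24h check passes (exit 0) because 2 days of validity remain.
openssl x509 -noout -in /tmp/demo-checkend.crt -checkend 86400 \
  && echo "still valid for >24h"
```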
	I1219 03:04:58.663023  573699 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:58.663147  573699 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:58.663225  573699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:58.698055  573699 cri.go:92] found id: "19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c"
	I1219 03:04:58.698124  573699 cri.go:92] found id: "c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7"
	I1219 03:04:58.698150  573699 cri.go:92] found id: "a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1"
	I1219 03:04:58.698161  573699 cri.go:92] found id: "fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652"
	I1219 03:04:58.698166  573699 cri.go:92] found id: "36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d"
	I1219 03:04:58.698176  573699 cri.go:92] found id: "ed906de27de9c3783be2432f68b3e79b562b368da4fe5ddde333748fe58c2534"
	I1219 03:04:58.698180  573699 cri.go:92] found id: "72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d"
	I1219 03:04:58.698185  573699 cri.go:92] found id: "872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358"
	I1219 03:04:58.698189  573699 cri.go:92] found id: "dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525"
	I1219 03:04:58.698201  573699 cri.go:92] found id: "ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503"
	I1219 03:04:58.698208  573699 cri.go:92] found id: "069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd"
	I1219 03:04:58.698212  573699 cri.go:92] found id: "49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233"
	I1219 03:04:58.698216  573699 cri.go:92] found id: ""
	I1219 03:04:58.698271  573699 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:58.725948  573699 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537/rootfs","created":"2025-12-19T03:04:58.065318041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-103644_50f4d1ce4fca33a4531f882f5fb97a4e","io.kubernetes.cri.sa
ndbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","pid":981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c/rootfs","created":"2025-12-19T03:04:58.375811399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.3","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-name":"kube-controller-manager-
default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be/rootfs","created":"2025-12-19T03:04:58.067793692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube
-system_kube-controller-manager-default-k8s-diff-port-103644_ac53bb8a0832eefbaa4a648be6aad901","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f/rootfs","created":"2025-12-19T03:04:58.050783422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernet
es.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-103644_996cf4b38188d4b0d664648ad2102013","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","pid":796,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc/rootfs","created":"2025-12-19T03:04:58.031779484Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","
io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-diff-port-103644_4275d7c883d3f735b8de47264bc63415","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb4214
99d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1/rootfs","created":"2025-12-19T03:04:58.294875595Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.3","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7/rootfs","created":"2025-12-19T03:04:58.293243949Z","
annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.3","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652/rootfs","created":"2025-12-19T03:04:58.225549561Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"co
ntainer","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"}]
	I1219 03:04:58.726160  573699 cri.go:129] list returned 8 containers
	I1219 03:04:58.726176  573699 cri.go:132] container: {ID:0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 Status:running}
	I1219 03:04:58.726215  573699 cri.go:134] skipping 0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 - not in ps
	I1219 03:04:58.726225  573699 cri.go:132] container: {ID:19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c Status:running}
	I1219 03:04:58.726238  573699 cri.go:138] skipping {19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c running}: state = "running", want "paused"
	I1219 03:04:58.726253  573699 cri.go:132] container: {ID:6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be Status:running}
	I1219 03:04:58.726263  573699 cri.go:134] skipping 6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be - not in ps
	I1219 03:04:58.726272  573699 cri.go:132] container: {ID:6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f Status:running}
	I1219 03:04:58.726282  573699 cri.go:134] skipping 6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f - not in ps
	I1219 03:04:58.726287  573699 cri.go:132] container: {ID:84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc Status:running}
	I1219 03:04:58.726296  573699 cri.go:134] skipping 84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc - not in ps
	I1219 03:04:58.726300  573699 cri.go:132] container: {ID:a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 Status:running}
	I1219 03:04:58.726310  573699 cri.go:138] skipping {a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 running}: state = "running", want "paused"
	I1219 03:04:58.726317  573699 cri.go:132] container: {ID:c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 Status:running}
	I1219 03:04:58.726327  573699 cri.go:138] skipping {c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 running}: state = "running", want "paused"
	I1219 03:04:58.726334  573699 cri.go:132] container: {ID:fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 Status:running}
	I1219 03:04:58.726341  573699 cri.go:138] skipping {fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 running}: state = "running", want "paused"
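The skip decisions above come from intersecting two views of the node: `crictl ps` yields the kube-system container IDs, while `runc list` yields every task including pause sandboxes. An ID is acted on only if it appears in both lists *and* is already in the wanted state (here "paused" — so everything running is skipped). A loose shell re-creation of that filter from cri.go, over stand-in IDs:

```shell
want=paused
crictl_ids="aaa bbb"                                  # stand-in for crictl ps output
runc_list="aaa:running bbb:paused sandbox1:running"   # stand-in id:status pairs from runc
for entry in $runc_list; do
  id=${entry%%:*}; state=${entry##*:}
  case " $crictl_ids " in
    *" $id "*) ;;                                     # known to crictl, keep checking
    *) echo "skipping $id - not in ps"; continue ;;
  esac
  if [ "$state" != "$want" ]; then
    echo "skipping $id: state = \"$state\", want \"$want\""
  else
    echo "selecting $id"
  fi
done | tee /tmp/demo-filter.out
```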
	I1219 03:04:58.726406  573699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:58.736002  573699 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:58.736024  573699 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:58.736083  573699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:58.745325  573699 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:58.746851  573699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103644" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.747840  573699 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103644" cluster setting kubeconfig missing "default-k8s-diff-port-103644" context setting]
	I1219 03:04:58.749236  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.751783  573699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:58.761185  573699 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:04:58.761233  573699 kubeadm.go:602] duration metric: took 25.202742ms to restartPrimaryControlPlane
	I1219 03:04:58.761245  573699 kubeadm.go:403] duration metric: took 98.23938ms to StartCluster
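restartPrimaryControlPlane chooses between a soft restart and a full kubeadm re-init by diffing the deployed config against the freshly rendered one; an empty `diff -u` (exit 0) is what produces "The running cluster does not require reconfiguration" above. The decision reduces to (stand-in paths):

```shell
# Two identical configs stand in for kubeadm.yaml and kubeadm.yaml.new on the node.
printf 'kind: ClusterConfiguration\n' > /tmp/demo-kubeadm.yaml
cp /tmp/demo-kubeadm.yaml /tmp/demo-kubeadm.yaml.new
# Exit 0 from diff => configs identical => skip re-running kubeadm phases.
if diff -u /tmp/demo-kubeadm.yaml /tmp/demo-kubeadm.yaml.new >/dev/null; then
  echo "no reconfiguration needed"
fi
```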
	I1219 03:04:58.761266  573699 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.761344  573699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.763956  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.764278  573699 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:58.764352  573699 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:58.764458  573699 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764482  573699 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.764491  573699 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:58.764498  573699 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764518  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:58.764533  573699 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764530  573699 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764551  573699 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764557  573699 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764521  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764523  573699 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103644"
	W1219 03:04:58.764565  573699 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:58.764660  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764898  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765067  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	W1219 03:04:58.764563  573699 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:58.765224  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.765244  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765778  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.766439  573699 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:58.769848  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:58.795158  573699 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.795295  573699 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:58.795354  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.796260  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.798810  573699 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:58.798816  573699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:57.865290  566718 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.865322  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.373051  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.867408  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.364332  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.245497  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.746387  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.245217  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.749455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.246279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.748208  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.247627  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.745395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.247400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.747210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.799225  573699 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:58.799247  573699 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:58.799304  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.799993  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:58.800017  573699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:58.800075  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.800356  573699 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:58.800371  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:58.800429  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.837919  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.838753  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.846681  573699 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:58.846725  573699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:58.846799  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.869014  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.891596  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.990117  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:59.008626  573699 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:59.009409  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:59.016187  573699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:04:59.016907  573699 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:59.044939  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:59.044973  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:59.048120  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:59.087063  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:59.087153  573699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:59.114132  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:59.114163  573699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:59.144085  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:05:00.372562  573699 node_ready.go:49] node "default-k8s-diff-port-103644" is "Ready"
	I1219 03:05:00.372622  573699 node_ready.go:38] duration metric: took 1.356373278s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:05:00.372644  573699 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:00.372706  573699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:57.979521  569947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:57.983495  569947 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.983523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.489816  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.984080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.484148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.983915  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.484939  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.985080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.486418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.986557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.484684  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.866115  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.365239  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.866184  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.366415  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.863549  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.364375  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.863998  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.363890  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.863749  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.382768  566718 kapi.go:107] duration metric: took 12.523639555s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:04.433515  566718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-002036 addons enable metrics-server
	
	I1219 03:05:04.435631  566718 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:05:04.437408  566718 addons.go:546] duration metric: took 22.668379604s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:05:04.437463  566718 start.go:247] waiting for cluster config update ...
	I1219 03:05:04.437482  566718 start.go:256] writing updated cluster config ...
	I1219 03:05:04.437853  566718 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:04.443668  566718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:04.450779  566718 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:00.248093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.749216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.247778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.747890  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.245449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.746684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.247359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.746557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.746278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.448117  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.43867528s)
	I1219 03:05:01.448182  573699 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.431240621s)
	I1219 03:05:01.448196  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.399991052s)
	I1219 03:05:01.448260  573699 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:01.448385  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.304270108s)
	I1219 03:05:01.448406  573699 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:05:01.448485  573699 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.075756393s)
	I1219 03:05:01.448520  573699 api_server.go:72] duration metric: took 2.684209271s to wait for apiserver process to appear ...
	I1219 03:05:01.448536  573699 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:01.448558  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.448716  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:01.458744  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:01.458783  573699 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:01.950069  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.959300  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:01.960703  573699 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:01.960739  573699 api_server.go:131] duration metric: took 512.19419ms to wait for apiserver health ...
	I1219 03:05:01.960751  573699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:01.965477  573699 system_pods.go:59] 9 kube-system pods found
	I1219 03:05:01.965544  573699 system_pods.go:61] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.965560  573699 system_pods.go:61] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.965595  573699 system_pods.go:61] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.965611  573699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.965623  573699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.965631  573699 system_pods.go:61] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.965640  573699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.965653  573699 system_pods.go:61] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.965660  573699 system_pods.go:61] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.965670  573699 system_pods.go:74] duration metric: took 4.91154ms to wait for pod list to return data ...
	I1219 03:05:01.965682  573699 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:01.969223  573699 default_sa.go:45] found service account: "default"
	I1219 03:05:01.969255  573699 default_sa.go:55] duration metric: took 3.563468ms for default service account to be created ...
	I1219 03:05:01.969269  573699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:01.973647  573699 system_pods.go:86] 9 kube-system pods found
	I1219 03:05:01.973775  573699 system_pods.go:89] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.973790  573699 system_pods.go:89] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.973797  573699 system_pods.go:89] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.973804  573699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.973810  573699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.973828  573699 system_pods.go:89] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.973834  573699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.973840  573699 system_pods.go:89] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.973843  573699 system_pods.go:89] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.973852  573699 system_pods.go:126] duration metric: took 4.574679ms to wait for k8s-apps to be running ...
	I1219 03:05:01.973859  573699 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:01.973912  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:02.653061  573699 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.204735295s)
	I1219 03:05:02.653137  573699 system_svc.go:56] duration metric: took 679.266214ms WaitForService to wait for kubelet
	I1219 03:05:02.653168  573699 kubeadm.go:587] duration metric: took 3.888855367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:02.653197  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:02.653199  573699 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:02.656332  573699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:02.656365  573699 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:02.656382  573699 node_conditions.go:105] duration metric: took 3.090983ms to run NodePressure ...
	I1219 03:05:02.656398  573699 start.go:242] waiting for startup goroutines ...
	I1219 03:05:05.900902  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.247656336s)
	I1219 03:05:05.901008  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:06.370072  573699 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:05:06.370443  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:06.413077  573699 out.go:179] * Verifying dashboard addon...
	I1219 03:05:02.984573  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.483377  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.983965  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.483784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.983862  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.484412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.985034  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.484458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.983536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:06.463527  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:08.958366  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:05.245656  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.747655  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.245722  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.748049  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.245806  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.806712  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.317551  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.746359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.246666  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.745789  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.432631  573699 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:06.442236  573699 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:06.442267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.938273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.436226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.935844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.437222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.937396  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.436432  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.937420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.436795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.982775  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.484705  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.483954  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.984850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.484036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.985868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.484253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.984283  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.483325  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:11.457419  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:13.957361  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:10.247114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.246179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.245687  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.245905  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.745641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.245181  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.937352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.436009  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.937001  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.437140  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.937021  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.936272  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.937045  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.436754  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.983838  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.483669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.983389  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.483140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.483333  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.983426  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.483195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.982683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.483883  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:16.457830  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:18.956955  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:15.245238  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.746028  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.245738  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.746152  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.245944  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.244810  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.745484  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.245267  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.747027  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.935367  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.437144  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.936697  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.436257  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.938151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.436806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.936368  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.436056  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.936823  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.436574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.956728  566718 pod_ready.go:94] pod "coredns-5dd5756b68-l88tx" is "Ready"
	I1219 03:05:20.956755  566718 pod_ready.go:86] duration metric: took 16.505943894s for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.959784  566718 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.964097  566718 pod_ready.go:94] pod "etcd-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.964121  566718 pod_ready.go:86] duration metric: took 4.312579ms for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.967209  566718 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.971311  566718 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.971340  566718 pod_ready.go:86] duration metric: took 4.107095ms for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.974403  566718 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.155192  566718 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-002036" is "Ready"
	I1219 03:05:21.155230  566718 pod_ready.go:86] duration metric: took 180.802142ms for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.356374  566718 pod_ready.go:83] waiting for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.755068  566718 pod_ready.go:94] pod "kube-proxy-666m9" is "Ready"
	I1219 03:05:21.755101  566718 pod_ready.go:86] duration metric: took 398.695005ms for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.955309  566718 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355240  566718 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-002036" is "Ready"
	I1219 03:05:22.355268  566718 pod_ready.go:86] duration metric: took 399.930732ms for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355280  566718 pod_ready.go:40] duration metric: took 17.911572961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.403101  566718 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1219 03:05:22.405195  566718 out.go:203] 
	W1219 03:05:22.406549  566718 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1219 03:05:22.407721  566718 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1219 03:05:22.409075  566718 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-002036" cluster and "default" namespace by default
	I1219 03:05:17.983934  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.983469  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.483031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.483856  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.983202  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.983682  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.483477  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.246405  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.745732  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.745513  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.246072  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.746161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.245454  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.745802  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.246011  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.745886  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.936632  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.937387  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.438356  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.936036  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.436638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.436285  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.936343  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.436214  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.983526  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.483608  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.984007  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.483768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.983330  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.483626  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.983245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.983688  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.483645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.245298  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.745913  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.246357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.746837  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.245727  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.745064  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.245698  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.745390  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.245749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.746545  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.436179  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.936807  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.436692  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.936427  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.436416  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.936100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.436165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.936887  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.437744  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.983729  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.484151  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.982796  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.483575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.983807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.482703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.483041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.245841  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.746191  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.246984  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.746555  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.245535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.245430  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.746001  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.245532  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.745216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.936806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.437044  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.937073  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.436137  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.937365  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.936352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.435813  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.936438  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.435923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.483382  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.984500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.483032  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.984071  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.482466  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.983161  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.482900  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.983524  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.245754  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.745276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.246044  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.747272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.246098  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.746535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.245821  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.745937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.245762  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.745615  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.936381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.436916  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:36.936622  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.436000  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.937259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.437162  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.937047  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.437352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.936682  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.436615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.983600  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.483773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:38.983567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.483752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.983264  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.983322  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.483362  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.983957  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.484274  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.246185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.746459  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.246128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.745336  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.245848  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.745183  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.938808  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.437447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:41.936560  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.436119  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.935681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.436727  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.436379  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.936023  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.983002  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.484428  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:43.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.484439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.983087  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.483617  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.483126  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.982743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.483122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.747099  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.245089  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.746901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.245684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.745166  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.245353  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.745700  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.245083  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.745319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.936637  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.436382  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.935972  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.436262  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.937175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.435775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.936174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.436927  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.936454  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.436467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.983564  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.484562  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.983390  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.483073  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.984121  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.482952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.483850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.245533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.746378  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.246407  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.746164  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.746473  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.245686  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.745616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.246701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.746221  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.937100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.436658  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.936554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.436723  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.436301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.936888  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.983429  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.484287  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.983438  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.982975  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.483937  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.984116  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.982483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.484172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.245635  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.746068  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.245613  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.245784  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.746179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.246036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.745916  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.246105  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.745511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.936404  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.436974  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.937181  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.436933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.936461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.435893  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.936715  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.435977  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.936537  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.436413  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.984117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.483494  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.983431  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.483144  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.983693  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.483725  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.483568  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.983844  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.484041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.247210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.246917  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.746507  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.246482  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.745791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.246149  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.745750  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.246542  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.746182  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.935753  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.936399  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.437035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.936175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.437157  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.936167  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.437079  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.936622  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.435994  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.484159  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.983491  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.483027  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.984206  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.482988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.984416  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.482988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.983673  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.483363  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.245974  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.246325  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.746954  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.246178  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.746530  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.246617  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.746319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.246086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.745852  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.937050  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.436626  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.935960  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.436359  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.936462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.436428  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.936121  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.436653  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.983609  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.483348  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.983602  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.483970  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.984565  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.483846  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.983764  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.983995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.483230  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.246294  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.746747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.245812  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.746679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.246641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.245869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.745759  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.245568  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.746073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.936517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.435795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.937696  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.436353  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.935510  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.436005  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.936614  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.436666  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.937104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.436494  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.982961  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.483812  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.984205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.484367  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.983535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.483245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.982974  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.483840  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.983639  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.245741  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.746076  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.746268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.245914  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.745460  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.246201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.745720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.246075  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.746406  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.936573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.436355  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.935609  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.436112  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.936695  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.436177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.936615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.436180  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.936693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.436473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.984187  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.484214  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.983011  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.483899  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.984512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.482716  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.983406  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.985122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.483290  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.246645  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.746554  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.245477  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.746237  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.246559  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.746156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.245694  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.744920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.246400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.745171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.936301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.435818  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.436319  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.436967  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.436573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.936226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.436480  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.983215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.483166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.983180  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.483488  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.983441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.482752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.983544  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.482808  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.746511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.245967  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.746303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.245996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.745286  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.246778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.745279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.245781  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.745086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.936101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.437131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.936600  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.436041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.937177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.437421  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.935735  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.436019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.437190  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.984252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.483837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.983514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.482704  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.983246  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.482944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.984320  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.246209  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.745803  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.245503  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.246768  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.745863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.245185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.745549  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.245747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.746416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.935759  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.435954  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.436706  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.936420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.436605  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.937043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.437152  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.436211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.983286  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.483036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.485767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.983683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.483037  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.483748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.245980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.745904  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.747073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.246061  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.746010  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.745926  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.245654  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.745463  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.437530  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.436942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.437229  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.936794  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.436501  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.936447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.436258  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.983789  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.483692  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.983255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.483001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.982877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.483721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.983399  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.482771  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.983968  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.483847  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.246603  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.745229  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.245985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.746233  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.246354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.746354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.245729  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.745993  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.745977  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.436604  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.936997  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.436608  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.936332  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.436076  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.937096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.936644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.436313  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.483231  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.983328  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.483130  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.983671  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.984498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.483267  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.982818  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.483172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.246007  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.246281  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.746636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.245338  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.746505  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.246541  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.246003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.746025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.935627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.437425  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.936905  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.436271  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.436681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.937261  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.436230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.983761  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.483697  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.983928  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.484339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.983038  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.483830  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.983519  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.246203  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.745909  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.245212  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.746317  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.246429  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.746706  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.245252  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.746054  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.248935  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.436150  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.937541  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.436306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.937380  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.437032  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.437101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.435707  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.983425  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.482996  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.984413  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.483150  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.983223  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.483220  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.983167  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.482640  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.983417  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... same message logged every ~500ms by process 569947 through 03:07:37.483666 ...]
	I1219 03:06:55.245215  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... same message logged every ~500ms by process 568301 through 03:07:39.746275 ...]
	I1219 03:06:55.936135  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... same message logged every ~500ms by process 573699 through 03:07:37.437100 ...]
	I1219 03:07:37.936746  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.436661  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.936108  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.436741  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.937134  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.437138  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.984072  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.483408  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.982980  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.483839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.983815  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.484237  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.982748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.483227  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.983491  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.483502  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.246302  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.746840  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.245743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.745752  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.245764  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.745565  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.245413  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.745818  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.245622  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.746548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.936117  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.436793  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.937328  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.436385  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.937184  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.437161  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.936755  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.436384  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.937437  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.436119  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.483872  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.983964  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.484354  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.983693  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.483534  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.983273  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.483358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.983949  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.245051  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.745840  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.245710  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.747059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.245761  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.746224  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.245979  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.746397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.246462  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.745161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.936393  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.435574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.936269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.436736  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.935923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.436191  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.937125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.436724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.936060  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.436464  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.983875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.983702  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.483743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.983649  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.484353  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.484106  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.983289  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.483003  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.245241  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.746800  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.245636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.745903  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.245501  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.746786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.245828  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.746731  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.245243  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.746109  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.936423  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.436185  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.937335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.435811  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.936607  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.437193  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.436703  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.936452  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.436033  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.982921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.984334  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.483331  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.983338  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.983619  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.483807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.983721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.483219  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.745310  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.748380  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.246087  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.246172  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.746116  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.246000  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.745364  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.938959  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.436375  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.936439  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.435973  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.936388  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.435955  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.937067  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.436689  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.936873  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.436068  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.983216  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.982893  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.983507  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.483848  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.983741  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.483139  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.483474  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.245849  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.745943  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.245514  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.745976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.245776  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.745774  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.246195  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.937126  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.437088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.936378  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.435816  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.936486  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.436861  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.936773  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.437070  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.983196  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.482648  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.984096  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.483607  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.483828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.983686  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.484218  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.984889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.484117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.245432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.746171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.246148  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.746794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.245134  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.745858  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.245332  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.245744  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.745345  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.935722  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.437147  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.937110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.436107  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.936683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.437338  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.937224  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.435895  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.936364  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.436440  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.984241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.483451  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.983165  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.483042  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.982951  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.484340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.983004  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.483822  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.983489  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.483877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.246451  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.937288  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical kapi.go:96 poll messages repeated every ~500ms by processes 568301, 573699, and 569947 from 03:08:10 through 03:08:53; pod state remained Pending: [<nil>] throughout ...]
	I1219 03:08:52.483534  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.746190  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.436627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.935956  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.436964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.437181  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.982975  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.483774  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.983188  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.484313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.983476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.483624  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.983235  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.484059  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.983666  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.483836  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.246365  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.746334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.246033  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.746651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.245323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.746357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.245635  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.745658  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.245395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.745819  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.936516  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.436483  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.936444  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.936892  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.436633  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.936620  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.436269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.936896  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.436566  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.983297  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.484464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.483511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.982836  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.483736  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.983424  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.483308  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.982575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.483472  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.245397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.746693  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.245417  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.745772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.745980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.745540  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.245125  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.746311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.436345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.937223  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.436491  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.936542  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.436156  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.936757  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.436434  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.437143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.983140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.483948  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.983404  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.484135  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.983017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.483191  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.983258  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.483593  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.982879  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.482719  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.745523  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.246156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.746714  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.245457  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.745845  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.245496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.745521  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.745647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.936297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.435928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.936499  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.935885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.436830  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.937053  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.436174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.936555  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.436004  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.983540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.483013  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.983280  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.483326  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.983039  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.483498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.483944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.983380  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.483057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.246452  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.746248  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.246124  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.746214  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.245557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.746434  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.245268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.746177  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.245924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.747881  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.936969  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.936145  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.435740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.937011  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.437024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.935613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.436125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.436909  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.984340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.483254  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.984703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.483313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.982835  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.483493  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.982869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.983946  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.483204  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.245275  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.746276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.245920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.746771  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.245651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.744791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.245637  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.745922  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.936545  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.937153  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.435953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.937080  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.936110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.435657  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.935804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.436240  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.983897  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.483952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.484088  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.983714  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.483215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.983277  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.483667  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.982875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.483370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.245437  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.745749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.246263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.245277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.245283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.745807  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.745496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.935998  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.436702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.936853  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.936508  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.435898  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.938866  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.436406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.936267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.435443  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.983387  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.483176  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.984078  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.483842  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.483314  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.483709  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.746235  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.246283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.746411  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.246592  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.745927  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.245680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.745389  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.246386  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.745671  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.936495  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.436178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.435968  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.936852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.436035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.436057  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.936860  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.983478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.483606  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.984122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.490050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.982603  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.483055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.984015  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.483501  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.982832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.245020  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.745924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.245930  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.745911  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.745201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.245713  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.745983  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.245893  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.745539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.935985  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.436747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.936740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.436110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.937088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.436764  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.436386  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.983173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.483859  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.983142  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.984166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.483826  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.983185  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.484158  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.984358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.482832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.246393  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.745896  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.245850  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.246273  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.747864  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.245616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.745334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.246449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.744981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.936971  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.436804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.436958  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.936877  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.436656  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.936136  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.983744  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.482921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.983872  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.483540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.984141  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.483479  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.984063  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.483481  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.746558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.246611  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.745533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.245131  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.746326  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.246887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.745358  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.246189  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.745991  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.937573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.435677  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.936406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.435935  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.936714  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.435885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.936556  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.983361  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.482912  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.983873  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.482660  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.483503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.983067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.483638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.245846  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.746643  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.245931  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.746121  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.246355  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.745777  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.245928  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.246014  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.745623  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.936490  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.437169  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.936638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.435797  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.937106  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.436462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.935673  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.435704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.983064  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.483495  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.983383  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.482815  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.483521  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.983458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.483539  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.982669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.482740  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.245254  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.746529  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.246403  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.746576  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.245194  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.245791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.745384  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.246056  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.745809  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.936502  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.436533  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.936298  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.436872  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.936965  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.436624  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.936645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.435868  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.936019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.436761  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.984260  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.483436  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.983307  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.983837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.983703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.483097  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.984370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.483476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.245416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.745596  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.246315  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.746972  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.246432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.746169  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.245899  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.745701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.246684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.746013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.936103  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.436731  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.936130  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.436934  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.936650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.435890  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.936552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.436324  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.936567  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.436613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.982857  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.483173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.984076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.983152  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.483700  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.483248  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.983111  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.483698  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.245724  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.746426  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.245360  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.245174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.746009  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.246343  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.746019  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.245779  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.745882  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.935947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.437327  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.937129  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.436468  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.436333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.936134  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.436385  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.937151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.437232  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.983942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.483661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.983172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.483439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.982645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.483045  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.984031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.483303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.245641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.745823  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.245494  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.746765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.245879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.745869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.245211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.246504  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.744996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.936844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.436478  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.935984  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.436742  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.935862  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.436143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.936623  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.936964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.436154  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.983001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.483616  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.483478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.982888  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.483505  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.482828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.982887  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.483514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.245552  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.745120  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.246143  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.746163  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.245633  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.745368  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.246475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.745271  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.245933  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.745805  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.935671  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.436335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.936196  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.436273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.436266  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.936782  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.936448  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.436442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.983418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.483281  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.983117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.483767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.984021  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.483731  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.983275  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.483869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.983375  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.482882  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.245668  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.746147  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.246640  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.746736  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.246420  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.745966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.246253  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.745906  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.246303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.745986  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.937381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.436018  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.936227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.437410  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.935713  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.436449  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.935644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.435982  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.483558  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.983528  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.483170  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.984155  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.483754  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.983412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.483938  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.983465  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.483020  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.245720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.745374  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.246756  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.745755  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.245418  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.746818  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.245897  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.745485  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.245161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.746048  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.936177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.437209  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.936770  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.436342  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.936061  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.436819  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.935988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.436564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.935683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.437297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.983512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.484563  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.983839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.483026  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.983149  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.484482  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.983378  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.482721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.246464  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.746065  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.246367  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.746647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.245786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.746272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.245936  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.745748  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.245512  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.745830  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.936845  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.436141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.937290  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.437316  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.435947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.936694  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.936790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.436457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.983105  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.983252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.483908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.983724  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.483864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.983787  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.483233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.983574  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.482995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.245383  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.746128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.246198  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.246100  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.746119  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.246290  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.746036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.745323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.936983  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.437037  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.936606  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.435930  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.936507  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.936455  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.435839  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.436995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.984129  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.484212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.984303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.483306  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.983518  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.482738  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.982612  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.483504  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.983434  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.482971  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.246296  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.746349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.247475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.746626  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.246070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.746520  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.245142  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.745887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.245695  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.745960  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.936318  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.435767  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.936550  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.436719  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.935917  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.435988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.936787  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.436849  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.935749  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.436170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.483708  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.983679  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.483567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.983364  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.483546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.983622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.484178  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.482768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.745743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.246985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.746088  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.245673  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.746257  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.246242  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.745638  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.246113  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.745493  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.936613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.436769  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.937022  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.436509  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.436799  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.935953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.436096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.936230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.482661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.984210  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.482755  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.983557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.483535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.982947  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.483792  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.484233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.246539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.746528  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.746739  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.245697  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.745790  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.245070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.745400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.246339  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.745958  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.935928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.436394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.936522  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.436247  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.936524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.436518  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.936708  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.437978  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.936041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.436496  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.982762  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.983515  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.982989  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.483821  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.983511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.482875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.983288  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.483464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.245892  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.745342  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.246251  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.746455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.246114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.745679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.243066  568301 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:10:53.243101  568301 kapi.go:107] duration metric: took 6m0.001125868s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:53.243227  568301 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:53.244995  568301 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 03:10:53.246175  568301 addons.go:546] duration metric: took 6m5.940868392s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 03:10:53.246216  568301 start.go:247] waiting for cluster config update ...
	I1219 03:10:53.246230  568301 start.go:256] writing updated cluster config ...
	I1219 03:10:53.246533  568301 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:53.251613  568301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:53.256756  568301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.261260  568301 pod_ready.go:94] pod "coredns-66bc5c9577-qmb9z" is "Ready"
	I1219 03:10:53.261285  568301 pod_ready.go:86] duration metric: took 4.502294ms for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.263432  568301 pod_ready.go:83] waiting for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.267796  568301 pod_ready.go:94] pod "etcd-embed-certs-536489" is "Ready"
	I1219 03:10:53.267819  568301 pod_ready.go:86] duration metric: took 4.363443ms for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.269959  568301 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.273954  568301 pod_ready.go:94] pod "kube-apiserver-embed-certs-536489" is "Ready"
	I1219 03:10:53.273978  568301 pod_ready.go:86] duration metric: took 3.994974ms for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.276324  568301 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.655995  568301 pod_ready.go:94] pod "kube-controller-manager-embed-certs-536489" is "Ready"
	I1219 03:10:53.656024  568301 pod_ready.go:86] duration metric: took 379.67922ms for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.856274  568301 pod_ready.go:83] waiting for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.256232  568301 pod_ready.go:94] pod "kube-proxy-qhlhx" is "Ready"
	I1219 03:10:54.256260  568301 pod_ready.go:86] duration metric: took 399.957557ms for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.456456  568301 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856278  568301 pod_ready.go:94] pod "kube-scheduler-embed-certs-536489" is "Ready"
	I1219 03:10:54.856307  568301 pod_ready.go:86] duration metric: took 399.821962ms for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856318  568301 pod_ready.go:40] duration metric: took 1.60467121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:54.908914  568301 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:10:54.910224  568301 out.go:179] * Done! kubectl is now configured to use "embed-certs-536489" cluster and "default" namespace by default
	I1219 03:10:50.936043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.437199  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.937554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.436648  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.935325  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.437090  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.936467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.435747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.937514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.437259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.983483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.483110  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.483441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.983723  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.483799  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.983265  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.482795  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.980094  569947 kapi.go:107] duration metric: took 6m0.000564024s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:57.980271  569947 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:57.982221  569947 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:10:57.983556  569947 addons.go:546] duration metric: took 6m7.330731268s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:10:57.983643  569947 start.go:247] waiting for cluster config update ...
	I1219 03:10:57.983661  569947 start.go:256] writing updated cluster config ...
	I1219 03:10:57.983965  569947 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:57.988502  569947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:57.993252  569947 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:57.997922  569947 pod_ready.go:94] pod "coredns-7d764666f9-hm5hz" is "Ready"
	I1219 03:10:57.997946  569947 pod_ready.go:86] duration metric: took 4.66305ms for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.000317  569947 pod_ready.go:83] waiting for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.004843  569947 pod_ready.go:94] pod "etcd-no-preload-208281" is "Ready"
	I1219 03:10:58.004871  569947 pod_ready.go:86] duration metric: took 4.527165ms for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.006889  569947 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.010814  569947 pod_ready.go:94] pod "kube-apiserver-no-preload-208281" is "Ready"
	I1219 03:10:58.010843  569947 pod_ready.go:86] duration metric: took 3.912426ms for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.012893  569947 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.394606  569947 pod_ready.go:94] pod "kube-controller-manager-no-preload-208281" is "Ready"
	I1219 03:10:58.394643  569947 pod_ready.go:86] duration metric: took 381.720753ms for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.594310  569947 pod_ready.go:83] waiting for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.994002  569947 pod_ready.go:94] pod "kube-proxy-xst8w" is "Ready"
	I1219 03:10:58.994037  569947 pod_ready.go:86] duration metric: took 399.698104ms for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.194965  569947 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594191  569947 pod_ready.go:94] pod "kube-scheduler-no-preload-208281" is "Ready"
	I1219 03:10:59.594219  569947 pod_ready.go:86] duration metric: took 399.226469ms for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594230  569947 pod_ready.go:40] duration metric: took 1.605690954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:59.642421  569947 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:10:59.644674  569947 out.go:179] * Done! kubectl is now configured to use "no-preload-208281" cluster and "default" namespace by default
	I1219 03:10:55.937173  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.435825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.936702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.436527  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.436611  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.936591  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.436321  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.937837  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.436459  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.936639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.437141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.936951  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.436292  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.437702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.936237  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.436721  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.936104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.439639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.936149  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:06.433765  573699 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:11:06.433806  573699 kapi.go:107] duration metric: took 6m0.001182154s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:11:06.433932  573699 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:11:06.435864  573699 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:11:06.437280  573699 addons.go:546] duration metric: took 6m7.672932083s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:11:06.437331  573699 start.go:247] waiting for cluster config update ...
	I1219 03:11:06.437348  573699 start.go:256] writing updated cluster config ...
	I1219 03:11:06.437666  573699 ssh_runner.go:195] Run: rm -f paused
	I1219 03:11:06.441973  573699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:06.446110  573699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.450837  573699 pod_ready.go:94] pod "coredns-66bc5c9577-86vsf" is "Ready"
	I1219 03:11:06.450868  573699 pod_ready.go:86] duration metric: took 4.729554ms for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.453222  573699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.457430  573699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.457451  573699 pod_ready.go:86] duration metric: took 4.204892ms for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.459510  573699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.463733  573699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.463756  573699 pod_ready.go:86] duration metric: took 4.230488ms for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.465771  573699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.846433  573699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.846461  573699 pod_ready.go:86] duration metric: took 380.664307ms for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.046474  573699 pod_ready.go:83] waiting for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.446485  573699 pod_ready.go:94] pod "kube-proxy-lgw6f" is "Ready"
	I1219 03:11:07.446515  573699 pod_ready.go:86] duration metric: took 400.010893ms for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.647551  573699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046807  573699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:08.046840  573699 pod_ready.go:86] duration metric: took 399.227778ms for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046853  573699 pod_ready.go:40] duration metric: took 1.604833632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:08.095708  573699 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:11:08.097778  573699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103644" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                          NAMESPACE
	9fd9fcb69c27d       6e38f40d628db       14 minutes ago      Running             storage-provisioner        2                   aae0cdf2362ad       storage-provisioner                          kube-system
	3cdafe1a0b3cd       3a975970da2f5       14 minutes ago      Running             proxy                      0                   dbe3fe0429042       kubernetes-dashboard-kong-78b7499b45-g6tgn   kubernetes-dashboard
	02f3bc754401e       3a975970da2f5       14 minutes ago      Exited              clear-stale-pid            0                   dbe3fe0429042       kubernetes-dashboard-kong-78b7499b45-g6tgn   kubernetes-dashboard
	708506e29f868       a0607af4fcd8a       15 minutes ago      Running             kubernetes-dashboard-api   0                   338c1a70100c2       kubernetes-dashboard-api-99557f86c-cwj8j     kubernetes-dashboard
	36f624d579187       4921d7a6dffa9       15 minutes ago      Running             kindnet-cni                1                   4a03e4967de85       kindnet-zbmbl                                kube-system
	f72629902da50       56cc512116c8f       15 minutes ago      Running             busybox                    1                   9a062f6cd7419       busybox                                      default
	03619448b03a9       6e38f40d628db       15 minutes ago      Exited              storage-provisioner        1                   aae0cdf2362ad       storage-provisioner                          kube-system
	5262f26bad2b0       aa5e3ebc0dfed       15 minutes ago      Running             coredns                    1                   1ab134838ef92       coredns-7d764666f9-hm5hz                     kube-system
	6c79e07745b0b       af0321f3a4f38       15 minutes ago      Running             kube-proxy                 1                   1766b28c41e87       kube-proxy-xst8w                             kube-system
	cacf5e35e790a       73f80cdc073da       15 minutes ago      Running             kube-scheduler             1                   38931718ec045       kube-scheduler-no-preload-208281             kube-system
	fe48441f2b926       5032a56602e1b       15 minutes ago      Running             kube-controller-manager    1                   46efefa83a3c7       kube-controller-manager-no-preload-208281    kube-system
	e2b6de3f6ca9f       0a108f7189562       15 minutes ago      Running             etcd                       1                   7d8e57ec3badf       etcd-no-preload-208281                       kube-system
	496cc0b515ef5       58865405a13bc       15 minutes ago      Running             kube-apiserver             1                   2379cbb88b443       kube-apiserver-no-preload-208281             kube-system
	a698a9bb37123       56cc512116c8f       15 minutes ago      Exited              busybox                    0                   4abbdd6e11aa4       busybox                                      default
	0cbaba368082a       aa5e3ebc0dfed       15 minutes ago      Exited              coredns                    0                   46c9c96ac93e1       coredns-7d764666f9-hm5hz                     kube-system
	6bee3b8cfdfc0       4921d7a6dffa9       15 minutes ago      Exited              kindnet-cni                0                   c90f21626354e       kindnet-zbmbl                                kube-system
	6647bd08b2c7d       af0321f3a4f38       15 minutes ago      Exited              kube-proxy                 0                   467f4389c2fa5       kube-proxy-xst8w                             kube-system
	0457ac1d0e6da       73f80cdc073da       16 minutes ago      Exited              kube-scheduler             0                   7e8fbfab6fa3e       kube-scheduler-no-preload-208281             kube-system
	7dd5f1a15d955       5032a56602e1b       16 minutes ago      Exited              kube-controller-manager    0                   9cda79d7fa6bc       kube-controller-manager-no-preload-208281    kube-system
	06cb2742e807f       58865405a13bc       16 minutes ago      Exited              kube-apiserver             0                   c9445c97a9ea5       kube-apiserver-no-preload-208281             kube-system
	ee999ba4f0b47       0a108f7189562       16 minutes ago      Exited              etcd                       0                   90bbfe2cbe82e       etcd-no-preload-208281                       kube-system
	
	
	==> containerd <==
	Dec 19 03:19:41 no-preload-208281 containerd[451]: time="2025-12-19T03:19:41.527468567Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1aebe15_3209_41e5_9992_da4b0690a286.slice/cri-containerd-f72629902da5012e1c73db0cc2fb5796d6949dd40d2d8d666241679a2eb11c14.scope/hugetlb.1GB.events\""
	Dec 19 03:19:41 no-preload-208281 containerd[451]: time="2025-12-19T03:19:41.528218498Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ae2e7891eaa1ff806e636f311fb81.slice/cri-containerd-cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa.scope/hugetlb.2MB.events\""
	Dec 19 03:19:41 no-preload-208281 containerd[451]: time="2025-12-19T03:19:41.528322276Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ae2e7891eaa1ff806e636f311fb81.slice/cri-containerd-cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.543772812Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355754afcd0ce2d7bab6c853c60e836c.slice/cri-containerd-496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.543923755Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355754afcd0ce2d7bab6c853c60e836c.slice/cri-containerd-496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.544738410Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80442131b1359e6657f2959b40f80467.slice/cri-containerd-fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.544871768Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80442131b1359e6657f2959b40f80467.slice/cri-containerd-fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.545532666Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d16e46_3e1f_4d38_a486_8f15642946c7.slice/cri-containerd-6c79e07745b0b6a4cfbe2451c7c287765d16436fb7f8d8ae0bf0a5017b7b3e22.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.545632370Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d16e46_3e1f_4d38_a486_8f15642946c7.slice/cri-containerd-6c79e07745b0b6a4cfbe2451c7c287765d16436fb7f8d8ae0bf0a5017b7b3e22.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.546311212Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59441d91_a2b7_4d87_86d1_5ccaaec4e398.slice/cri-containerd-5262f26bad2b02c527c0b40bd0ffbfc743349345eab765fb7a4a2dc9baa4a4f3.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.546410696Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59441d91_a2b7_4d87_86d1_5ccaaec4e398.slice/cri-containerd-5262f26bad2b02c527c0b40bd0ffbfc743349345eab765fb7a4a2dc9baa4a4f3.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.547281146Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1aebe15_3209_41e5_9992_da4b0690a286.slice/cri-containerd-f72629902da5012e1c73db0cc2fb5796d6949dd40d2d8d666241679a2eb11c14.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.547388862Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1aebe15_3209_41e5_9992_da4b0690a286.slice/cri-containerd-f72629902da5012e1c73db0cc2fb5796d6949dd40d2d8d666241679a2eb11c14.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.548139599Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ae2e7891eaa1ff806e636f311fb81.slice/cri-containerd-cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.548244781Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ae2e7891eaa1ff806e636f311fb81.slice/cri-containerd-cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.548941249Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93a9992ff7a9c41e489b493737b5b488.slice/cri-containerd-e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.549025861Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93a9992ff7a9c41e489b493737b5b488.slice/cri-containerd-e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.549757014Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bab6e7d_150b_4c8e_ab0a_933ec046c863.slice/cri-containerd-9fd9fcb69c27dd803192f562a3233b3b5d43391dc2b3ad8eeb73ae2478a8ef20.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.549838304Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bab6e7d_150b_4c8e_ab0a_933ec046c863.slice/cri-containerd-9fd9fcb69c27dd803192f562a3233b3b5d43391dc2b3ad8eeb73ae2478a8ef20.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.550473066Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9548d7ec_c85a_4cd0_8105_d4105327518f.slice/cri-containerd-708506e29f86833f15d101ae0ed3ecddeef0698196d03758d60545d772495dbb.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.550549346Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9548d7ec_c85a_4cd0_8105_d4105327518f.slice/cri-containerd-708506e29f86833f15d101ae0ed3ecddeef0698196d03758d60545d772495dbb.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.551264370Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pode7d80d3e_7bf1_4e49_b7f9_c0911bbae20d.slice/cri-containerd-36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.551377373Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pode7d80d3e_7bf1_4e49_b7f9_c0911bbae20d.slice/cri-containerd-36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30.scope/hugetlb.1GB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.552113064Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb0b181_97ad_4854_b20a_0c327870fe32.slice/cri-containerd-3cdafe1a0b3cd2742862d68aeb352fb4b6954a0436e9bf279a9ef67a0d7e28a6.scope/hugetlb.2MB.events\""
	Dec 19 03:19:51 no-preload-208281 containerd[451]: time="2025-12-19T03:19:51.552199111Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb0b181_97ad_4854_b20a_0c327870fe32.slice/cri-containerd-3cdafe1a0b3cd2742862d68aeb352fb4b6954a0436e9bf279a9ef67a0d7e28a6.scope/hugetlb.1GB.events\""
	
	
	==> coredns [0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45603 - 16917 "HINFO IN 759710811400899281.7107360172383803948. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035088679s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5262f26bad2b02c527c0b40bd0ffbfc743349345eab765fb7a4a2dc9baa4a4f3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46213 - 43876 "HINFO IN 5483904004133871625.7627938750895341566. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036456851s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-208281
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-208281
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-208281
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_03_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-208281
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:19:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:16:48 +0000   Fri, 19 Dec 2025 03:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:16:48 +0000   Fri, 19 Dec 2025 03:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:16:48 +0000   Fri, 19 Dec 2025 03:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:16:48 +0000   Fri, 19 Dec 2025 03:04:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-208281
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                1c0e3333-d7dc-4f0f-825f-76ec9118fda3
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7d764666f9-hm5hz                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-no-preload-208281                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-zbmbl                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-no-preload-208281                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-no-preload-208281                200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xst8w                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-no-preload-208281                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-5d785b57d4-zgcxz                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-api-99557f86c-cwj8j                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-auth-69d44f85cb-ngqw8               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-g6tgn               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-mh44r                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  16m   node-controller  Node no-preload-208281 event: Registered Node no-preload-208281 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node no-preload-208281 event: Registered Node no-preload-208281 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a] <==
	{"level":"info","ts":"2025-12-19T03:04:50.259702Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-19T03:04:51.137046Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:51.137165Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:51.137271Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:51.137296Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:04:51.137318Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.138300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.138338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:04:51.138361Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.138371Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.139075Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-208281 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:04:51.139251Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:51.139298Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:51.140658Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:04:51.150478Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:04:51.152337Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:51.154001Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:51.159679Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-19T03:04:51.160172Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:14:51.185748Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1205}
	{"level":"info","ts":"2025-12-19T03:14:51.206638Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1205,"took":"20.48087ms","hash":2178082393,"current-db-size-bytes":4427776,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":1986560,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:14:51.206700Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2178082393,"revision":1205,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:51.190458Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1471}
	{"level":"info","ts":"2025-12-19T03:19:51.193684Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1471,"took":"2.826795ms","hash":131670808,"current-db-size-bytes":4427776,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":2269184,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-12-19T03:19:51.193728Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":131670808,"revision":1471,"compact-revision":1205}
	
	
	==> etcd [ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400] <==
	{"level":"info","ts":"2025-12-19T03:03:51.907125Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T03:03:52.092230Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:52.092325Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:52.092385Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:52.092400Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:03:52.092420Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.092901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.092934Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:03:52.092956Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.092968Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.093621Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-208281 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:03:52.093797Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:03:52.094157Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.094316Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:03:52.094753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:03:52.094788Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:03:52.095692Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:03:52.096533Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:03:52.096740Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.097119Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.097293Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.097472Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-19T03:03:52.097708Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-19T03:03:52.100387Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-19T03:03:52.100520Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:20:01 up  2:02,  0 user,  load average: 0.84, 0.71, 3.88
	Linux no-preload-208281 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30] <==
	I1219 03:17:54.675945       1 main.go:301] handling current node
	I1219 03:18:04.680444       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:18:04.680482       1 main.go:301] handling current node
	I1219 03:18:14.673605       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:18:14.673649       1 main.go:301] handling current node
	I1219 03:18:24.674466       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:18:24.674500       1 main.go:301] handling current node
	I1219 03:18:34.673445       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:18:34.673485       1 main.go:301] handling current node
	I1219 03:18:44.678713       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:18:44.678749       1 main.go:301] handling current node
	I1219 03:18:54.681697       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:18:54.681737       1 main.go:301] handling current node
	I1219 03:19:04.673983       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:19:04.674023       1 main.go:301] handling current node
	I1219 03:19:14.676816       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:19:14.676845       1 main.go:301] handling current node
	I1219 03:19:24.678241       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:19:24.678271       1 main.go:301] handling current node
	I1219 03:19:34.673649       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:19:34.673705       1 main.go:301] handling current node
	I1219 03:19:44.673888       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:19:44.673936       1 main.go:301] handling current node
	I1219 03:19:54.681894       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:19:54.681929       1 main.go:301] handling current node
	
	
	==> kindnet [6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5] <==
	I1219 03:04:06.051097       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:06.051366       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1219 03:04:06.051510       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:06.051536       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:06.051565       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:06.349389       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:06.349419       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:06.349429       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:06.349652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:06.649804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:06.649831       1 metrics.go:72] Registering metrics
	I1219 03:04:06.649914       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:16.349785       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:04:16.349843       1 main.go:301] handling current node
	I1219 03:04:26.350693       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:04:26.350742       1 main.go:301] handling current node
	
	
	==> kube-apiserver [06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b] <==
	I1219 03:03:56.969284       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:04:01.627715       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:01.633710       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:01.720366       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:04:01.820488       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:04:29.231885       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35654: use of closed network connection
	I1219 03:04:29.919479       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:29.924208       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:29.924345       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:04:29.924420       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:30.004960       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.107.180.11"}
	W1219 03:04:30.009324       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:30.009394       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:04:30.015660       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:30.015711       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c] <==
	 > logger="UnhandledError"
	I1219 03:15:53.489789       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:17:53.489175       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:17:53.489251       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:17:53.489271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:17:53.490341       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:17:53.490442       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:17:53.490461       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:19:52.493828       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:19:52.493925       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:19:53.495102       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 03:19:53.495109       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:19:53.495146       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:19:53.495158       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:19:53.495218       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:19:53.496343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459] <==
	I1219 03:04:00.839117       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.837799       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.837808       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.839137       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.838124       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.837790       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840052       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:00.838829       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1219 03:04:00.840148       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840179       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840208       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840419       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840520       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:04:00.840621       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-208281"
	I1219 03:04:00.840694       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 03:04:00.840804       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:04:00.840892       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:00.840932       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.844066       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.851874       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-208281" podCIDRs=["10.244.0.0/24"]
	I1219 03:04:00.936090       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.936112       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:04:00.936119       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:04:00.941316       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:20.843624       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569] <==
	I1219 03:13:56.941710       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:14:26.874871       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:14:26.950787       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:14:56.880554       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:14:56.957957       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:15:26.886162       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:15:26.966561       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:15:56.891442       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:15:56.974238       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:16:26.896334       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:16:26.980855       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:16:56.900682       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:16:56.988649       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:17:26.904993       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:17:26.996550       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:17:56.909970       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:17:57.004172       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:18:26.913795       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:18:27.011632       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:18:56.918879       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:18:57.019560       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:19:26.923537       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:19:27.028339       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:19:56.929132       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:19:57.036975       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e] <==
	I1219 03:04:02.669560       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:02.763688       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:02.864741       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:02.864778       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:04:02.864887       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:02.895089       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:02.895152       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:04:02.901730       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:02.902597       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:04:02.902656       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:02.905212       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:02.905267       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:02.905299       1 config.go:200] "Starting service config controller"
	I1219 03:04:02.905503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:02.905543       1 config.go:309] "Starting node config controller"
	I1219 03:04:02.905556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:02.905343       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:02.905575       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:02.905613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:03.005940       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:03.005960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:03.006069       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [6c79e07745b0b6a4cfbe2451c7c287765d16436fb7f8d8ae0bf0a5017b7b3e22] <==
	I1219 03:04:53.926249       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:54.009323       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:54.110296       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:54.110360       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:04:54.112201       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:54.144319       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:54.144539       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:04:54.152380       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:54.153129       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:04:54.153409       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:54.158402       1 config.go:200] "Starting service config controller"
	I1219 03:04:54.158424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:54.158455       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:54.158461       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:54.158725       1 config.go:309] "Starting node config controller"
	I1219 03:04:54.158769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:54.158802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:54.159359       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:54.159389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:54.259246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:54.259286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:54.259563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9] <==
	E1219 03:03:54.053347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:03:54.053428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:03:54.053431       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:03:54.053480       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:03:54.054473       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:03:54.054832       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:03:54.054918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 03:03:54.930372       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 03:03:54.986215       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:03:55.014573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:03:55.022953       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:03:55.060177       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:03:55.077789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:03:55.092080       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:03:55.132455       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:03:55.168987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:03:55.187464       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:03:55.195073       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:03:55.310657       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:03:55.319194       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 03:03:55.338053       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 03:03:55.339019       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:03:55.400888       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:03:55.505051       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:03:57.844169       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa] <==
	I1219 03:04:50.553894       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:04:52.470363       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:04:52.470518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:52.470533       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:04:52.470542       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:04:52.503045       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:04:52.503075       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:52.505942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:52.505985       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:52.506691       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:04:52.506775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:04:52.606360       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:19:18 no-preload-208281 kubelet[575]: E1219 03:19:18.139727     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-69d44f85cb-ngqw8" podUID="31ee399e-d9d7-44cb-8ddb-ad815ecf728c"
	Dec 19 03:19:21 no-preload-208281 kubelet[575]: E1219 03:19:21.138388     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:19:21 no-preload-208281 kubelet[575]: E1219 03:19:21.139692     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:19:24 no-preload-208281 kubelet[575]: E1219 03:19:24.139659     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-mh44r" podUID="e17cc5d6-dcc5-48bb-8976-a7ad6acbc1e5"
	Dec 19 03:19:29 no-preload-208281 kubelet[575]: E1219 03:19:29.138871     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:19:29 no-preload-208281 kubelet[575]: E1219 03:19:29.140025     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:19:30 no-preload-208281 kubelet[575]: E1219 03:19:30.139297     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-69d44f85cb-ngqw8" podUID="31ee399e-d9d7-44cb-8ddb-ad815ecf728c"
	Dec 19 03:19:34 no-preload-208281 kubelet[575]: E1219 03:19:34.138300     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:19:34 no-preload-208281 kubelet[575]: E1219 03:19:34.140083     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:19:37 no-preload-208281 kubelet[575]: E1219 03:19:37.138968     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-mh44r" podUID="e17cc5d6-dcc5-48bb-8976-a7ad6acbc1e5"
	Dec 19 03:19:40 no-preload-208281 kubelet[575]: E1219 03:19:40.138726     575 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-208281" containerName="etcd"
	Dec 19 03:19:42 no-preload-208281 kubelet[575]: E1219 03:19:42.138817     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:19:42 no-preload-208281 kubelet[575]: E1219 03:19:42.139863     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:19:42 no-preload-208281 kubelet[575]: E1219 03:19:42.140097     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-69d44f85cb-ngqw8" podUID="31ee399e-d9d7-44cb-8ddb-ad815ecf728c"
	Dec 19 03:19:45 no-preload-208281 kubelet[575]: E1219 03:19:45.138383     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:19:45 no-preload-208281 kubelet[575]: E1219 03:19:45.139443     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:19:48 no-preload-208281 kubelet[575]: E1219 03:19:48.138024     575 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-208281" containerName="kube-apiserver"
	Dec 19 03:19:49 no-preload-208281 kubelet[575]: E1219 03:19:49.139268     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-mh44r" podUID="e17cc5d6-dcc5-48bb-8976-a7ad6acbc1e5"
	Dec 19 03:19:50 no-preload-208281 kubelet[575]: E1219 03:19:50.138467     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-g6tgn" containerName="proxy"
	Dec 19 03:19:50 no-preload-208281 kubelet[575]: E1219 03:19:50.138675     575 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-208281" containerName="kube-scheduler"
	Dec 19 03:19:53 no-preload-208281 kubelet[575]: E1219 03:19:53.138257     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:19:53 no-preload-208281 kubelet[575]: E1219 03:19:53.139493     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:19:56 no-preload-208281 kubelet[575]: E1219 03:19:56.138728     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:19:56 no-preload-208281 kubelet[575]: E1219 03:19:56.139992     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:19:57 no-preload-208281 kubelet[575]: E1219 03:19:57.139741     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-69d44f85cb-ngqw8" podUID="31ee399e-d9d7-44cb-8ddb-ad815ecf728c"
	
	
	==> kubernetes-dashboard [708506e29f86833f15d101ae0ed3ecddeef0698196d03758d60545d772495dbb] <==
	E1219 03:08:01.767380       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:08:31.770858       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:09:01.774688       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:09:31.777517       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:10:01.780670       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:10:31.783930       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:11:01.787737       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:11:31.791468       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:12:01.795535       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:12:31.798395       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:13:01.802232       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:13:31.805509       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:14:01.808796       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:14:31.812276       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:15:01.815874       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:15:31.819560       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:16:01.822399       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:16:31.825653       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:17:01.829174       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:17:31.831895       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:18:01.835241       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:18:31.837889       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:19:01.841322       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:19:31.844048       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:20:01.846708       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [03619448b03a9d04f872cfb774d4d982be35c7d2aa5ec4e413e4952934532bf3] <==
	I1219 03:04:53.956199       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:23.958855       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9fd9fcb69c27dd803192f562a3233b3b5d43391dc2b3ad8eeb73ae2478a8ef20] <==
	W1219 03:19:37.044666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:39.047952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:39.053207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:41.055861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:41.059828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:43.063417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:43.068589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:45.071318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:45.075126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:47.078772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:47.083665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:49.087114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:49.091128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:51.093998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:51.098915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:53.102728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:53.106838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:55.110308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:55.115012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:57.117733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:57.121659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:59.124385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:59.129173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:01.132858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:01.137676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-208281 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-auth-69d44f85cb-ngqw8 kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-208281 describe pod metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-auth-69d44f85cb-ngqw8 kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-208281 describe pod metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-auth-69d44f85cb-ngqw8 kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r: exit status 1 (62.675988ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-zgcxz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-69d44f85cb-ngqw8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-7f7574785f-mh44r" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context no-preload-208281 describe pod metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-auth-69d44f85cb-ngqw8 kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:11:19.191014  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:42.240321  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:13:12.809444  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:13:39.193225  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:14:22.239912  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:20:08.816035524 +0000 UTC m=+3285.809159725
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-103644
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-103644:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9",
	        "Created": "2025-12-19T03:03:44.128298232Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:51.023922591Z",
	            "FinishedAt": "2025-12-19T03:04:49.466246851Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/hosts",
	        "LogPath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9-json.log",
	        "Name": "/default-k8s-diff-port-103644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-103644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-103644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9",
	                "LowerDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103644",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103644",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b38f0ca163c1f6cea6d42444c01079612af48be1fb169bb1f4a9bfd3afff4f26",
	            "SandboxKey": "/var/run/docker/netns/b38f0ca163c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-103644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b673923273edc2de2d190d760404bd86e1a35010cdce8800eb3623a9ac5b14fd",
	                    "EndpointID": "ad847978d54254c34f0d29018c8c8fef26d9735df8d4399060c7b0455bfafb6f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "a6:4e:56:15:10:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103644",
	                        "11c02a7b9398"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-103644 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-103644 logs -n 25: (1.507543502s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-967008                                                                                                                                                                                                                              │ cert-options-967008          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p kubernetes-upgrade-340572                                                                                                                                                                                                                        │ kubernetes-upgrade-340572    │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-821572 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-821572                                                                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ delete  │ -p disable-driver-mounts-443690                                                                                                                                                                                                                     │ disable-driver-mounts-443690 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-002036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p old-k8s-version-002036 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p embed-certs-536489 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:50.472071  573699 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:50.472443  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.472454  573699 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:50.472463  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.473301  573699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:04:50.474126  573699 out.go:368] Setting JSON to false
	I1219 03:04:50.476304  573699 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6429,"bootTime":1766107061,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:50.476440  573699 start.go:143] virtualization: kvm guest
	I1219 03:04:50.478144  573699 out.go:179] * [default-k8s-diff-port-103644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:50.479945  573699 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:50.480003  573699 notify.go:221] Checking for updates...
	I1219 03:04:50.482332  573699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:50.483901  573699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.485635  573699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:04:50.489602  573699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:50.493460  573699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:48.691145  569947 cli_runner.go:164] Run: docker network inspect no-preload-208281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:48.711282  569947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:48.716221  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.729144  569947 kubeadm.go:884] updating cluster {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:48.729324  569947 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:04:48.729375  569947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:48.763109  569947 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:48.763136  569947 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:48.763146  569947 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:04:48.763264  569947 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:48.763347  569947 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:48.796269  569947 cni.go:84] Creating CNI manager for ""
	I1219 03:04:48.796300  569947 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:48.796329  569947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:48.796369  569947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208281 NodeName:no-preload-208281 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:48.796558  569947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:48.796669  569947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:04:48.808026  569947 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:48.808102  569947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:48.819240  569947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1219 03:04:48.836384  569947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:04:48.852550  569947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1219 03:04:48.869275  569947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:48.873704  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.886490  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:48.994443  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:49.020494  569947 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281 for IP: 192.168.85.2
	I1219 03:04:49.020518  569947 certs.go:195] generating shared ca certs ...
	I1219 03:04:49.020533  569947 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:49.020722  569947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:49.020809  569947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:49.020826  569947 certs.go:257] generating profile certs ...
	I1219 03:04:49.020975  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.key
	I1219 03:04:49.021064  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key.8f504093
	I1219 03:04:49.021159  569947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key
	I1219 03:04:49.021324  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:49.021373  569947 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:49.021389  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:49.021430  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:49.021457  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:49.021480  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:49.021525  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:49.022292  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:49.050958  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:49.072475  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:49.095867  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:49.124289  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:04:49.150664  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:04:49.188239  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:49.216791  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:49.242767  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:49.264732  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:49.286635  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:49.313716  569947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:49.329405  569947 ssh_runner.go:195] Run: openssl version
	I1219 03:04:49.337082  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.347002  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:49.355979  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.360975  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.361048  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.457547  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:49.470846  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.484764  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:49.501564  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510435  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510523  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.583657  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:49.596341  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.615267  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:49.637741  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651506  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651606  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.719393  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:49.738446  569947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:49.759885  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:49.839963  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:49.916940  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:49.984478  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:50.052790  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:50.213057  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:50.323267  569947 kubeadm.go:401] StartCluster: {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.323602  569947 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:50.323919  569947 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:50.475134  569947 cri.go:92] found id: "cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa"
	I1219 03:04:50.475159  569947 cri.go:92] found id: "fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569"
	I1219 03:04:50.475166  569947 cri.go:92] found id: "e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a"
	I1219 03:04:50.475171  569947 cri.go:92] found id: "496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c"
	I1219 03:04:50.475175  569947 cri.go:92] found id: "0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7"
	I1219 03:04:50.475180  569947 cri.go:92] found id: "1b139b90f72cc73cf0a391fb1b6dde88df245b3d92b6a686104996e14c38330c"
	I1219 03:04:50.475184  569947 cri.go:92] found id: "6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5"
	I1219 03:04:50.475506  569947 cri.go:92] found id: "6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e"
	I1219 03:04:50.475532  569947 cri.go:92] found id: "0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9"
	I1219 03:04:50.475549  569947 cri.go:92] found id: "7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459"
	I1219 03:04:50.475553  569947 cri.go:92] found id: "06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b"
	I1219 03:04:50.475557  569947 cri.go:92] found id: "ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400"
	I1219 03:04:50.475562  569947 cri.go:92] found id: ""
	I1219 03:04:50.475632  569947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:50.558499  569947 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","pid":805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e/rootfs","created":"2025-12-19T03:04:49.720787385Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-208281_355754afcd0ce2d7bab6c853c60e836c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2/rootfs","created":"2025-12-19T03:04:49.778097457Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-208281_e43ae2e7891eaa1ff806e636f311fb81","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","pid":838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07/rootfs","created":"2025-12-19T03:04:49.777265025Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-208281_80442131b1359e6657f2959b40f80467","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"},{"ociVersion":"1.2.1","id":"496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","pid":902,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c/rootfs","created":"2025-12-19T03:04:49.944110218Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3/rootfs","created":"2025-12-19T03:04:49.76636358Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208281_93a9992ff7a9c41e489b493737b5b488","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa/rootfs","created":"2025-12-19T03:04:50.065275653Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a/rootfs","created":"2025-12-19T03:04:50.024946214Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569/rootfs","created":"2025-12-19T03:04:50.153274168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"}]
	I1219 03:04:50.559253  569947 cri.go:129] list returned 8 containers
	I1219 03:04:50.559288  569947 cri.go:132] container: {ID:2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e Status:running}
	I1219 03:04:50.559310  569947 cri.go:134] skipping 2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e - not in ps
	I1219 03:04:50.559318  569947 cri.go:132] container: {ID:38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 Status:running}
	I1219 03:04:50.559326  569947 cri.go:134] skipping 38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 - not in ps
	I1219 03:04:50.559332  569947 cri.go:132] container: {ID:46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 Status:running}
	I1219 03:04:50.559338  569947 cri.go:134] skipping 46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 - not in ps
	I1219 03:04:50.559343  569947 cri.go:132] container: {ID:496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c Status:running}
	I1219 03:04:50.559363  569947 cri.go:138] skipping {496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c running}: state = "running", want "paused"
	I1219 03:04:50.559373  569947 cri.go:132] container: {ID:7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 Status:running}
	I1219 03:04:50.559381  569947 cri.go:134] skipping 7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 - not in ps
	I1219 03:04:50.559386  569947 cri.go:132] container: {ID:cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa Status:running}
	I1219 03:04:50.559393  569947 cri.go:138] skipping {cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa running}: state = "running", want "paused"
	I1219 03:04:50.559400  569947 cri.go:132] container: {ID:e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a Status:running}
	I1219 03:04:50.559406  569947 cri.go:138] skipping {e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a running}: state = "running", want "paused"
	I1219 03:04:50.559412  569947 cri.go:132] container: {ID:fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 Status:running}
	I1219 03:04:50.559419  569947 cri.go:138] skipping {fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 running}: state = "running", want "paused"
	I1219 03:04:50.559472  569947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:50.576564  569947 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:50.576683  569947 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:50.576777  569947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:50.600225  569947 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:50.601759  569947 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208281" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.605721  569947 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208281" cluster setting kubeconfig missing "no-preload-208281" context setting]
	I1219 03:04:50.610686  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.613392  569947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:50.647032  569947 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1219 03:04:50.647196  569947 kubeadm.go:602] duration metric: took 70.481994ms to restartPrimaryControlPlane
	I1219 03:04:50.647478  569947 kubeadm.go:403] duration metric: took 324.224528ms to StartCluster
	I1219 03:04:50.647573  569947 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.647991  569947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.652215  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.652837  569947 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:50.652966  569947 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208281"
	I1219 03:04:50.652984  569947 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208281"
	W1219 03:04:50.652993  569947 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:50.653027  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.653048  569947 config.go:182] Loaded profile config "no-preload-208281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:50.653120  569947 addons.go:70] Setting default-storageclass=true in profile "no-preload-208281"
	I1219 03:04:50.653135  569947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208281"
	I1219 03:04:50.653460  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.653534  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.655588  569947 addons.go:70] Setting metrics-server=true in profile "no-preload-208281"
	I1219 03:04:50.655611  569947 addons.go:239] Setting addon metrics-server=true in "no-preload-208281"
	W1219 03:04:50.655621  569947 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:50.655656  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.656118  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.656525  569947 addons.go:70] Setting dashboard=true in profile "no-preload-208281"
	I1219 03:04:50.656563  569947 addons.go:239] Setting addon dashboard=true in "no-preload-208281"
	W1219 03:04:50.656574  569947 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:50.656622  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.657316  569947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:50.657617  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.660722  569947 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:50.661854  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:50.707508  569947 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:50.708775  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:50.708812  569947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:50.708834  569947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:50.495202  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:50.495941  573699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:50.539840  573699 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:50.540119  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.710990  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.671412726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.711217  573699 docker.go:319] overlay module found
	I1219 03:04:50.713697  573699 out.go:179] * Using the docker driver based on existing profile
	I1219 03:04:50.714949  573699 start.go:309] selected driver: docker
	I1219 03:04:50.714970  573699 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.715089  573699 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:50.716020  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.884011  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.859280212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.884478  573699 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:50.884531  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:50.884789  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:50.884940  573699 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.887403  573699 out.go:179] * Starting "default-k8s-diff-port-103644" primary control-plane node in "default-k8s-diff-port-103644" cluster
	I1219 03:04:50.888689  573699 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:04:50.889896  573699 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:50.891030  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:50.891092  573699 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:04:50.891106  573699 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:50.891194  573699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:50.891211  573699 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:50.891221  573699 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:04:50.891356  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:50.932991  573699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:50.933024  573699 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:50.933040  573699 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:50.933079  573699 start.go:360] acquireMachinesLock for default-k8s-diff-port-103644: {Name:mk39933c40de3c92aeeb6b9d20d3c90e6af0f1fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:50.933158  573699 start.go:364] duration metric: took 48.804µs to acquireMachinesLock for "default-k8s-diff-port-103644"
	I1219 03:04:50.933177  573699 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:04:50.933183  573699 fix.go:54] fixHost starting: 
	I1219 03:04:50.933489  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:50.973427  573699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103644: state=Stopped err=<nil>
	W1219 03:04:50.973619  573699 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:50.748260  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.195228143s)
	I1219 03:04:50.748361  566718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:51.828106  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml: (1.079706419s)
	I1219 03:04:51.828277  566718 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-002036"
	I1219 03:04:51.828773  566718 cli_runner.go:164] Run: docker container inspect old-k8s-version-002036 --format={{.State.Status}}
	I1219 03:04:51.856291  566718 out.go:179] * Verifying dashboard addon...
	I1219 03:04:50.708886  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.709108  569947 addons.go:239] Setting addon default-storageclass=true in "no-preload-208281"
	W1219 03:04:50.709132  569947 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:50.709161  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.709725  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.710101  569947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.710123  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:50.710173  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.716696  569947 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:50.716718  569947 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:50.716777  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.770714  569947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:50.770743  569947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:50.770811  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.772323  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.774548  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.782771  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.818125  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.922492  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:50.961986  569947 node_ready.go:35] waiting up to 6m0s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:50.964889  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.991305  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:50.991337  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:50.997863  569947 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:51.029470  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:51.029507  569947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:51.077218  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:51.083520  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:51.083552  569947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:51.107276  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:52.474618  569947 node_ready.go:49] node "no-preload-208281" is "Ready"
	I1219 03:04:52.474662  569947 node_ready.go:38] duration metric: took 1.512481187s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:52.474682  569947 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:04:52.474743  569947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:51.142743  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.559306992s)
	I1219 03:04:51.142940  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.499593696s)
	I1219 03:04:51.143060  568301 addons.go:500] Verifying addon metrics-server=true in "embed-certs-536489"
	I1219 03:04:51.143722  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:51.144038  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.580066034s)
	I1219 03:04:52.990446  568301 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.445475643s)
	I1219 03:04:52.990490  568301 api_server.go:72] duration metric: took 5.685402741s to wait for apiserver process to appear ...
	I1219 03:04:52.990498  568301 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:52.990528  568301 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:04:52.992275  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.373532841s)
	I1219 03:04:52.992364  568301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:53.002104  568301 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:04:53.006331  568301 api_server.go:141] control plane version: v1.34.3
	I1219 03:04:53.006385  568301 api_server.go:131] duration metric: took 15.878835ms to wait for apiserver health ...
	I1219 03:04:53.006399  568301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:53.016977  568301 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:53.017141  568301 system_pods.go:61] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.017157  568301 system_pods.go:61] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.017165  568301 system_pods.go:61] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.017184  568301 system_pods.go:61] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.017193  568301 system_pods.go:61] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.017199  568301 system_pods.go:61] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.017212  568301 system_pods.go:61] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.017219  568301 system_pods.go:61] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.017225  568301 system_pods.go:61] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.017233  568301 system_pods.go:74] duration metric: took 10.826754ms to wait for pod list to return data ...
	I1219 03:04:53.017244  568301 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:53.020879  568301 default_sa.go:45] found service account: "default"
	I1219 03:04:53.020911  568301 default_sa.go:55] duration metric: took 3.659738ms for default service account to be created ...
	I1219 03:04:53.020925  568301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:53.118092  568301 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:53.118237  568301 system_pods.go:89] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.118277  568301 system_pods.go:89] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.118286  568301 system_pods.go:89] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.118334  568301 system_pods.go:89] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.118346  568301 system_pods.go:89] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.118360  568301 system_pods.go:89] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.118368  568301 system_pods.go:89] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.118508  568301 system_pods.go:89] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.118523  568301 system_pods.go:89] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.118535  568301 system_pods.go:126] duration metric: took 97.602528ms to wait for k8s-apps to be running ...
	I1219 03:04:53.118546  568301 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:53.118629  568301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:53.213539  568301 addons.go:500] Verifying addon dashboard=true in "embed-certs-536489"
	I1219 03:04:53.213985  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:53.214117  568301 system_svc.go:56] duration metric: took 95.561896ms WaitForService to wait for kubelet
	I1219 03:04:53.214162  568301 kubeadm.go:587] duration metric: took 5.909072172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:53.214187  568301 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:53.220086  568301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:53.220122  568301 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:53.220143  568301 node_conditions.go:105] duration metric: took 5.94983ms to run NodePressure ...
	I1219 03:04:53.220159  568301 start.go:242] waiting for startup goroutines ...
	I1219 03:04:53.239792  568301 out.go:179] * Verifying dashboard addon...
	I1219 03:04:51.859124  566718 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:51.862362  566718 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.241980  568301 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:53.245176  568301 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747449  568301 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747476  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.245867  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.747323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:50.976005  573699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103644" ...
	I1219 03:04:50.976124  573699 cli_runner.go:164] Run: docker start default-k8s-diff-port-103644
	I1219 03:04:51.482862  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:51.514418  573699 kic.go:430] container "default-k8s-diff-port-103644" state is running.
	I1219 03:04:51.515091  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:51.545304  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:51.545913  573699 machine.go:94] provisionDockerMachine start ...
	I1219 03:04:51.546012  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:51.578064  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:51.578471  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:51.578526  573699 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:04:51.580615  573699 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46348->127.0.0.1:33098: read: connection reset by peer
	I1219 03:04:54.740022  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.740053  573699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103644"
	I1219 03:04:54.740121  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.764557  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.764812  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.764832  573699 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103644 && echo "default-k8s-diff-port-103644" | sudo tee /etc/hostname
	I1219 03:04:54.940991  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.941090  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.961163  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.961447  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.961472  573699 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103644/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:04:55.112211  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:04:55.112238  573699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:04:55.112272  573699 ubuntu.go:190] setting up certificates
	I1219 03:04:55.112285  573699 provision.go:84] configureAuth start
	I1219 03:04:55.112354  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.131633  573699 provision.go:143] copyHostCerts
	I1219 03:04:55.131701  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:04:55.131722  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:04:55.131814  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:04:55.131992  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:04:55.132009  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:04:55.132066  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:04:55.132178  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:04:55.132189  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:04:55.132230  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:04:55.132339  573699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103644 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-103644 localhost minikube]
	I1219 03:04:55.201421  573699 provision.go:177] copyRemoteCerts
	I1219 03:04:55.201486  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:04:55.201545  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.220254  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.324809  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:04:55.344299  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:04:55.364633  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:04:55.383945  573699 provision.go:87] duration metric: took 271.644189ms to configureAuth
	I1219 03:04:55.383975  573699 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:04:55.384174  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:55.384190  573699 machine.go:97] duration metric: took 3.838258422s to provisionDockerMachine
	I1219 03:04:55.384201  573699 start.go:293] postStartSetup for "default-k8s-diff-port-103644" (driver="docker")
	I1219 03:04:55.384218  573699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:04:55.384292  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:04:55.384363  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.402689  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.509385  573699 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:04:55.513698  573699 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:04:55.513738  573699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:04:55.513752  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:04:55.513809  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:04:55.513923  573699 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:04:55.514061  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:04:55.522610  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:55.542136  573699 start.go:296] duration metric: took 157.911131ms for postStartSetup
	I1219 03:04:55.542235  573699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:04:55.542278  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.560317  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.676892  573699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:04:55.683207  573699 fix.go:56] duration metric: took 4.75001221s for fixHost
	I1219 03:04:55.683240  573699 start.go:83] releasing machines lock for "default-k8s-diff-port-103644", held for 4.750073001s
	I1219 03:04:55.683337  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.706632  573699 ssh_runner.go:195] Run: cat /version.json
	I1219 03:04:55.706696  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.706708  573699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:04:55.706796  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.729248  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.729555  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.832375  573699 ssh_runner.go:195] Run: systemctl --version
	I1219 03:04:55.888761  573699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:04:55.894089  573699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:04:55.894170  573699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:04:55.902973  573699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:04:55.903001  573699 start.go:496] detecting cgroup driver to use...
	I1219 03:04:55.903039  573699 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:04:55.903123  573699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:04:55.924413  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:04:55.939247  573699 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:04:55.939312  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:04:55.955848  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:04:55.970636  573699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:04:56.060548  573699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:04:56.151469  573699 docker.go:234] disabling docker service ...
	I1219 03:04:56.151544  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:04:56.168733  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:04:56.183785  573699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:04:56.269923  573699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:04:56.358410  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:04:56.374184  573699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:04:56.391509  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:04:56.403885  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:04:56.418704  573699 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:04:56.418843  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:04:56.432502  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.446280  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:04:56.458732  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.471691  573699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:04:56.482737  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:04:56.494667  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:04:56.507284  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:04:56.520174  573699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:04:56.530768  573699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:04:56.541170  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:56.646657  573699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:04:56.781992  573699 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:04:56.782112  573699 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:04:56.788198  573699 start.go:564] Will wait 60s for crictl version
	I1219 03:04:56.788285  573699 ssh_runner.go:195] Run: which crictl
	I1219 03:04:56.793113  573699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:04:56.836402  573699 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:04:56.836474  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.864133  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.898122  573699 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:04:53.197683  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23269288s)
	I1219 03:04:53.197756  569947 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.199861038s)
	I1219 03:04:53.197848  569947 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:53.197862  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.120620602s)
	I1219 03:04:53.198058  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.09074876s)
	I1219 03:04:53.198096  569947 addons.go:500] Verifying addon metrics-server=true in "no-preload-208281"
	I1219 03:04:53.198179  569947 api_server.go:72] duration metric: took 2.540661776s to wait for apiserver process to appear ...
	I1219 03:04:53.198202  569947 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:53.198229  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.198445  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:53.205510  569947 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:04:53.205637  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.205671  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:53.698608  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.705658  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.705697  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:54.198361  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:54.202897  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:04:54.204079  569947 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:04:54.204114  569947 api_server.go:131] duration metric: took 1.005903946s to wait for apiserver health ...
	I1219 03:04:54.204127  569947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:54.208336  569947 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:54.208377  569947 system_pods.go:61] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.208389  569947 system_pods.go:61] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.208403  569947 system_pods.go:61] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.208424  569947 system_pods.go:61] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.208437  569947 system_pods.go:61] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.208444  569947 system_pods.go:61] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.208460  569947 system_pods.go:61] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.208472  569947 system_pods.go:61] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.208477  569947 system_pods.go:61] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.208488  569947 system_pods.go:74] duration metric: took 4.352835ms to wait for pod list to return data ...
	I1219 03:04:54.208503  569947 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:54.211346  569947 default_sa.go:45] found service account: "default"
	I1219 03:04:54.211373  569947 default_sa.go:55] duration metric: took 2.86243ms for default service account to be created ...
	I1219 03:04:54.211385  569947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:54.214301  569947 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:54.214337  569947 system_pods.go:89] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.214347  569947 system_pods.go:89] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.214360  569947 system_pods.go:89] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.214369  569947 system_pods.go:89] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.214377  569947 system_pods.go:89] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.214386  569947 system_pods.go:89] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.214402  569947 system_pods.go:89] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.214411  569947 system_pods.go:89] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.214421  569947 system_pods.go:89] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.214431  569947 system_pods.go:126] duration metric: took 3.039478ms to wait for k8s-apps to be running ...
	I1219 03:04:54.214443  569947 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:54.214504  569947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:54.371132  569947 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.165499888s)
	I1219 03:04:54.371186  569947 system_svc.go:56] duration metric: took 156.734958ms WaitForService to wait for kubelet
	I1219 03:04:54.371215  569947 kubeadm.go:587] duration metric: took 3.713723941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:54.371244  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:04:54.371246  569947 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:54.374625  569947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:54.374660  569947 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:54.374679  569947 node_conditions.go:105] duration metric: took 3.423654ms to run NodePressure ...
	I1219 03:04:54.374695  569947 start.go:242] waiting for startup goroutines ...
	I1219 03:04:57.635651  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.264367144s)
	I1219 03:04:57.635887  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:57.949184  569947 addons.go:500] Verifying addon dashboard=true in "no-preload-208281"
	I1219 03:04:57.949557  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:57.976511  569947 out.go:179] * Verifying dashboard addon...
	I1219 03:04:56.899304  573699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:56.919626  573699 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:56.924517  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:56.937946  573699 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:56.938108  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:56.938182  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.968240  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.968267  573699 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:04:56.968327  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.997359  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.997383  573699 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:56.997392  573699 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 containerd true true} ...
	I1219 03:04:56.997515  573699 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-103644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:56.997591  573699 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:57.033726  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:57.033760  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:57.033788  573699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:57.033818  573699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103644 NodeName:default-k8s-diff-port-103644 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:57.034013  573699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-103644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:57.034110  573699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:04:57.054291  573699 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:57.054366  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:57.069183  573699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1219 03:04:57.092986  573699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:04:57.114537  573699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1219 03:04:57.135768  573699 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:57.141830  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:57.157200  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:57.285296  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:57.321401  573699 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644 for IP: 192.168.94.2
	I1219 03:04:57.321425  573699 certs.go:195] generating shared ca certs ...
	I1219 03:04:57.321445  573699 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:57.321651  573699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:57.321728  573699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:57.321741  573699 certs.go:257] generating profile certs ...
	I1219 03:04:57.321895  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.key
	I1219 03:04:57.321969  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key.eac4724a
	I1219 03:04:57.322032  573699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key
	I1219 03:04:57.322452  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:57.322563  573699 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:57.322947  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:57.323038  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:57.323130  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:57.323212  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:57.323310  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:57.324261  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:57.367430  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:57.395772  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:57.447975  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:57.485724  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:04:57.550160  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:04:57.586359  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:57.650368  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:57.705528  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:57.753827  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:57.796129  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:57.846633  573699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:57.874041  573699 ssh_runner.go:195] Run: openssl version
	I1219 03:04:57.883186  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.893276  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:57.903322  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908713  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908788  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.959424  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:57.975955  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:57.987406  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:57.999924  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007017  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007094  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.066450  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:58.084889  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.104839  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:58.121039  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128831  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128908  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.238719  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:58.257473  573699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:58.269077  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:58.373050  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:58.472122  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:58.523474  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:58.567812  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:58.624150  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:58.663023  573699 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:58.663147  573699 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:58.663225  573699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:58.698055  573699 cri.go:92] found id: "19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c"
	I1219 03:04:58.698124  573699 cri.go:92] found id: "c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7"
	I1219 03:04:58.698150  573699 cri.go:92] found id: "a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1"
	I1219 03:04:58.698161  573699 cri.go:92] found id: "fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652"
	I1219 03:04:58.698166  573699 cri.go:92] found id: "36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d"
	I1219 03:04:58.698176  573699 cri.go:92] found id: "ed906de27de9c3783be2432f68b3e79b562b368da4fe5ddde333748fe58c2534"
	I1219 03:04:58.698180  573699 cri.go:92] found id: "72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d"
	I1219 03:04:58.698185  573699 cri.go:92] found id: "872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358"
	I1219 03:04:58.698189  573699 cri.go:92] found id: "dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525"
	I1219 03:04:58.698201  573699 cri.go:92] found id: "ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503"
	I1219 03:04:58.698208  573699 cri.go:92] found id: "069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd"
	I1219 03:04:58.698212  573699 cri.go:92] found id: "49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233"
	I1219 03:04:58.698216  573699 cri.go:92] found id: ""
	I1219 03:04:58.698271  573699 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:58.725948  573699 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537/rootfs","created":"2025-12-19T03:04:58.065318041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-103644_50f4d1ce4fca33a4531f882f5fb97a4e","io.kubernetes.cri.sa
ndbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","pid":981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c/rootfs","created":"2025-12-19T03:04:58.375811399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.3","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-name":"kube-controller-manager-
default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be/rootfs","created":"2025-12-19T03:04:58.067793692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube
-system_kube-controller-manager-default-k8s-diff-port-103644_ac53bb8a0832eefbaa4a648be6aad901","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f/rootfs","created":"2025-12-19T03:04:58.050783422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernet
es.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-103644_996cf4b38188d4b0d664648ad2102013","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","pid":796,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc/rootfs","created":"2025-12-19T03:04:58.031779484Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","
io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-diff-port-103644_4275d7c883d3f735b8de47264bc63415","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb4214
99d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1/rootfs","created":"2025-12-19T03:04:58.294875595Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.3","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7/rootfs","created":"2025-12-19T03:04:58.293243949Z","
annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.3","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652/rootfs","created":"2025-12-19T03:04:58.225549561Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"co
ntainer","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"}]
	I1219 03:04:58.726160  573699 cri.go:129] list returned 8 containers
	I1219 03:04:58.726176  573699 cri.go:132] container: {ID:0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 Status:running}
	I1219 03:04:58.726215  573699 cri.go:134] skipping 0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 - not in ps
	I1219 03:04:58.726225  573699 cri.go:132] container: {ID:19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c Status:running}
	I1219 03:04:58.726238  573699 cri.go:138] skipping {19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c running}: state = "running", want "paused"
	I1219 03:04:58.726253  573699 cri.go:132] container: {ID:6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be Status:running}
	I1219 03:04:58.726263  573699 cri.go:134] skipping 6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be - not in ps
	I1219 03:04:58.726272  573699 cri.go:132] container: {ID:6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f Status:running}
	I1219 03:04:58.726282  573699 cri.go:134] skipping 6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f - not in ps
	I1219 03:04:58.726287  573699 cri.go:132] container: {ID:84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc Status:running}
	I1219 03:04:58.726296  573699 cri.go:134] skipping 84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc - not in ps
	I1219 03:04:58.726300  573699 cri.go:132] container: {ID:a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 Status:running}
	I1219 03:04:58.726310  573699 cri.go:138] skipping {a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 running}: state = "running", want "paused"
	I1219 03:04:58.726317  573699 cri.go:132] container: {ID:c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 Status:running}
	I1219 03:04:58.726327  573699 cri.go:138] skipping {c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 running}: state = "running", want "paused"
	I1219 03:04:58.726334  573699 cri.go:132] container: {ID:fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 Status:running}
	I1219 03:04:58.726341  573699 cri.go:138] skipping {fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 running}: state = "running", want "paused"
	I1219 03:04:58.726406  573699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:58.736002  573699 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:58.736024  573699 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:58.736083  573699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:58.745325  573699 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:58.746851  573699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103644" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.747840  573699 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103644" cluster setting kubeconfig missing "default-k8s-diff-port-103644" context setting]
	I1219 03:04:58.749236  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.751783  573699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:58.761185  573699 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:04:58.761233  573699 kubeadm.go:602] duration metric: took 25.202742ms to restartPrimaryControlPlane
	I1219 03:04:58.761245  573699 kubeadm.go:403] duration metric: took 98.23938ms to StartCluster
	I1219 03:04:58.761266  573699 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.761344  573699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.763956  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.764278  573699 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:58.764352  573699 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:58.764458  573699 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764482  573699 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.764491  573699 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:58.764498  573699 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764518  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:58.764533  573699 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764530  573699 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764551  573699 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764557  573699 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764521  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764523  573699 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103644"
	W1219 03:04:58.764565  573699 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:58.764660  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764898  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765067  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	W1219 03:04:58.764563  573699 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:58.765224  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.765244  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765778  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.766439  573699 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:58.769848  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:58.795158  573699 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.795295  573699 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:58.795354  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.796260  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.798810  573699 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:58.798816  573699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:57.865290  566718 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.865322  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.373051  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.867408  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.364332  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.245497  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.746387  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.245217  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.749455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.246279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.748208  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.247627  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.745395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.247400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.747210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.799225  573699 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:58.799247  573699 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:58.799304  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.799993  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:58.800017  573699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:58.800075  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.800356  573699 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:58.800371  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:58.800429  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.837919  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.838753  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.846681  573699 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:58.846725  573699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:58.846799  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.869014  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.891596  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.990117  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:59.008626  573699 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:59.009409  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:59.016187  573699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:04:59.016907  573699 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:59.044939  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:59.044973  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:59.048120  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:59.087063  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:59.087153  573699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:59.114132  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:59.114163  573699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:59.144085  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:05:00.372562  573699 node_ready.go:49] node "default-k8s-diff-port-103644" is "Ready"
	I1219 03:05:00.372622  573699 node_ready.go:38] duration metric: took 1.356373278s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:05:00.372644  573699 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:00.372706  573699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:57.979521  569947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:57.983495  569947 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.983523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.489816  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.984080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.484148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.983915  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.484939  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.985080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.486418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.986557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.484684  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.866115  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.365239  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.866184  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.366415  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.863549  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.364375  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.863998  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.363890  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.863749  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.382768  566718 kapi.go:107] duration metric: took 12.523639555s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:04.433515  566718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-002036 addons enable metrics-server
	
	I1219 03:05:04.435631  566718 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:05:04.437408  566718 addons.go:546] duration metric: took 22.668379604s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:05:04.437463  566718 start.go:247] waiting for cluster config update ...
	I1219 03:05:04.437482  566718 start.go:256] writing updated cluster config ...
	I1219 03:05:04.437853  566718 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:04.443668  566718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:04.450779  566718 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:00.248093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.749216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.247778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.747890  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.245449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.746684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.247359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.746557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.746278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.448117  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.43867528s)
	I1219 03:05:01.448182  573699 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.431240621s)
	I1219 03:05:01.448196  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.399991052s)
	I1219 03:05:01.448260  573699 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:01.448385  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.304270108s)
	I1219 03:05:01.448406  573699 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:05:01.448485  573699 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.075756393s)
	I1219 03:05:01.448520  573699 api_server.go:72] duration metric: took 2.684209271s to wait for apiserver process to appear ...
	I1219 03:05:01.448536  573699 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:01.448558  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.448716  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:01.458744  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:01.458783  573699 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:01.950069  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.959300  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:01.960703  573699 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:01.960739  573699 api_server.go:131] duration metric: took 512.19419ms to wait for apiserver health ...
	I1219 03:05:01.960751  573699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:01.965477  573699 system_pods.go:59] 9 kube-system pods found
	I1219 03:05:01.965544  573699 system_pods.go:61] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.965560  573699 system_pods.go:61] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.965595  573699 system_pods.go:61] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.965611  573699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.965623  573699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.965631  573699 system_pods.go:61] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.965640  573699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.965653  573699 system_pods.go:61] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.965660  573699 system_pods.go:61] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.965670  573699 system_pods.go:74] duration metric: took 4.91154ms to wait for pod list to return data ...
	I1219 03:05:01.965682  573699 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:01.969223  573699 default_sa.go:45] found service account: "default"
	I1219 03:05:01.969255  573699 default_sa.go:55] duration metric: took 3.563468ms for default service account to be created ...
	I1219 03:05:01.969269  573699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:01.973647  573699 system_pods.go:86] 9 kube-system pods found
	I1219 03:05:01.973775  573699 system_pods.go:89] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.973790  573699 system_pods.go:89] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.973797  573699 system_pods.go:89] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.973804  573699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.973810  573699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.973828  573699 system_pods.go:89] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.973834  573699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.973840  573699 system_pods.go:89] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.973843  573699 system_pods.go:89] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.973852  573699 system_pods.go:126] duration metric: took 4.574679ms to wait for k8s-apps to be running ...
	I1219 03:05:01.973859  573699 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:01.973912  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:02.653061  573699 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.204735295s)
	I1219 03:05:02.653137  573699 system_svc.go:56] duration metric: took 679.266214ms WaitForService to wait for kubelet
	I1219 03:05:02.653168  573699 kubeadm.go:587] duration metric: took 3.888855367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:02.653197  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:02.653199  573699 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:02.656332  573699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:02.656365  573699 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:02.656382  573699 node_conditions.go:105] duration metric: took 3.090983ms to run NodePressure ...
	I1219 03:05:02.656398  573699 start.go:242] waiting for startup goroutines ...
	I1219 03:05:05.900902  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.247656336s)
	I1219 03:05:05.901008  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:06.370072  573699 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:05:06.370443  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:06.413077  573699 out.go:179] * Verifying dashboard addon...
	I1219 03:05:02.984573  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.483377  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.983965  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.483784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.983862  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.484412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.985034  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.484458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.983536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:06.463527  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:08.958366  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:05.245656  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.747655  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.245722  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.748049  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.245806  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.806712  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.317551  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.746359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.246666  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.745789  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.432631  573699 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:06.442236  573699 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:06.442267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.938273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.436226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.935844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.437222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.937396  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.436432  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.937420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.436795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.982775  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.484705  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.483954  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.984850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.484036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.985868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.484253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.984283  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.483325  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:11.457419  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:13.957361  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:10.247114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.937352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.436754  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.983838  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.483883  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:16.457830  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:18.956955  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:15.245238  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.747027  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.935367  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.436574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.956728  566718 pod_ready.go:94] pod "coredns-5dd5756b68-l88tx" is "Ready"
	I1219 03:05:20.956755  566718 pod_ready.go:86] duration metric: took 16.505943894s for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.959784  566718 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.964097  566718 pod_ready.go:94] pod "etcd-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.964121  566718 pod_ready.go:86] duration metric: took 4.312579ms for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.967209  566718 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.971311  566718 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.971340  566718 pod_ready.go:86] duration metric: took 4.107095ms for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.974403  566718 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.155192  566718 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-002036" is "Ready"
	I1219 03:05:21.155230  566718 pod_ready.go:86] duration metric: took 180.802142ms for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.356374  566718 pod_ready.go:83] waiting for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.755068  566718 pod_ready.go:94] pod "kube-proxy-666m9" is "Ready"
	I1219 03:05:21.755101  566718 pod_ready.go:86] duration metric: took 398.695005ms for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.955309  566718 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355240  566718 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-002036" is "Ready"
	I1219 03:05:22.355268  566718 pod_ready.go:86] duration metric: took 399.930732ms for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355280  566718 pod_ready.go:40] duration metric: took 17.911572961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.403101  566718 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1219 03:05:22.405195  566718 out.go:203] 
	W1219 03:05:22.406549  566718 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1219 03:05:22.407721  566718 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1219 03:05:22.409075  566718 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-002036" cluster and "default" namespace by default
	I1219 03:05:17.983934  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.483477  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.246405  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.745886  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.936632  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.436214  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.983526  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.483645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.245298  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.746545  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.437744  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.983729  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.483041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.245841  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.745216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.936806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.435923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.245754  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:39.745615  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.936381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.436615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:37.983600  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.484274  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.246185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:44.745183  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:40.938808  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:42.983002  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.483122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.245621  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.747099  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.245089  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.746901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.245684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.745166  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.245353  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.745700  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.245083  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.745319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:45.936637  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.436382  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:46.935972  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.436262  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.937175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.435775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.936174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.436927  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.936454  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.436467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:47.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:48.983564  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.484562  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:49.983390  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.483073  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.984121  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.482952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.483850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.245533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.746378  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.246407  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.746164  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.746473  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.245686  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.745616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.246701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.746221  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:50.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:51.937100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.436658  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.936554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.436723  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.436301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.936888  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:52.983429  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.484287  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:53.983438  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:54.982975  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.483937  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.984116  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.982483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.484172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.245635  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.746068  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.245613  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.245784  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.746179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.246036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.745916  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.246105  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.745511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:55.936404  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.436974  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:56.937181  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.436933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.936461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.435893  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.936715  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.435977  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.936537  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.436413  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:57.984117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.483494  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:58.983431  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.483144  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.983693  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.483725  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.983769  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.483568  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.983844  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.484041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.247210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.246917  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.746507  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.246482  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.745791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.246149  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.745750  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.246542  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.746182  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.935753  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.936399  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.437035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.936175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.437157  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.936167  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.437079  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.936622  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.435994  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.484159  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.983491  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.483027  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.984206  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.482988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.984416  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.482988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.983673  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.483363  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.245974  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.246325  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.746954  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.246178  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.746530  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.246617  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.746319  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.246086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.745852  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:05.937050  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.436626  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:06.935960  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.436359  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.936462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.436428  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.936121  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.436653  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.983609  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.483348  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.983602  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.483970  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.984565  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.483846  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.983764  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.983995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.483230  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.246294  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.746747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.245812  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.746679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.246641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.245869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.745759  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.245568  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.746073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.936517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.435795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.937696  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.436353  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.935510  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.436005  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.936614  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.436666  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.937104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.436494  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.982961  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.483812  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.984205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.484367  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.983535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.483245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.982974  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.483840  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.983639  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.245741  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.746076  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.746268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.245914  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.745460  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.246201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.745720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.246075  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.746406  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.936573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.436355  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.935609  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.436112  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.936695  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.436177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.936615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.436180  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.936693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.436473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.984187  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.484214  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.983011  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.483899  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.984512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.482716  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.983406  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.985122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.483290  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.246645  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.746554  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.245477  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.746237  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.246559  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.746156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.245694  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.744920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.246400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.745171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.936301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.435818  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.436319  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.436967  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.436573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.936226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.436480  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.983215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.483166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.983180  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.483488  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.983441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.482752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.983544  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.482808  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.746511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.245967  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.746303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.245996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.745286  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.246778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.745279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.245781  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.745086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.936101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.437131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.936600  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.436041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.937177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.437421  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.935735  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.436019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.437190  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.984252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.483837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.983514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.482704  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.983246  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.482944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.984320  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.246209  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.745803  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.245503  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.246768  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.745863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.245185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.745549  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.245747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.746416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.935759  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.435954  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.436706  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.936420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.436605  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.937043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.437152  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.436211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.983286  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.483036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.485767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.983683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.483037  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.483748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.245980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.745904  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.747073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.246061  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.746010  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.745926  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.245654  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.745463  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.437530  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.436942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.437229  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.936794  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.436501  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.936447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.436258  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.983789  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.483692  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.983255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.483001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.982877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.483721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.983399  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.482771  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.983968  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.483847  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.246603  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.745229  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.245985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.746233  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.246354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.746354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.245729  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.745993  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.745977  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.436604  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.936997  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.436608  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.936332  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.436076  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.937096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.936644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.436313  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.483231  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.983328  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.483130  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.983671  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.984498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.483267  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.982818  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.483172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.246007  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.246281  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.746636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.245338  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.746505  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.246541  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.246003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.746025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.935627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.437425  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.936905  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.436271  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.436681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.937261  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.436230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.983761  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.483697  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.983928  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.484339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.983038  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.483830  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.983519  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.246203  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.745909  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.245212  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.746317  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.246429  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.746706  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.245252  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.746054  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.248935  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.745879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.436150  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:51.937541  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.436306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.937380  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.437032  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.437101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.435707  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:52.983425  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.482996  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:53.984413  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.483150  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:54.983223  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.483220  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.983167  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.482640  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.983417  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.483783  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.245215  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.246277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.245861  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.745707  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.245371  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.245515  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.745912  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:55.936135  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.437841  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:56.936910  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.436323  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.936660  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.436524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.936221  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.436563  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.935913  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.436645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:57.984125  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.483388  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:58.982737  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.483773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:59.983545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.483422  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.983154  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.483664  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.983641  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.483442  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.245728  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.745308  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.246025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.745765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.246408  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.746848  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.245127  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.746104  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.246223  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:00.936306  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.437231  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:01.937148  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.936729  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.436019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.936896  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.436656  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.936521  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.435899  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:02.983225  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.483720  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:03.983254  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.483258  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:04.983295  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.483964  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.984348  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:06.483161  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:06.983777  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:07.483360  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.245839  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.745805  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:06.245976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:06.745981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:07.245994  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:07.745694  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:08.245465  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:08.748052  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:09.245632  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:09.745648  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:05.936748  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:06.436721  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:06.935970  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:07.436670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:07.936857  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:08.436351  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:08.936092  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:09.436265  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:09.936566  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:10.437204  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:07.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:08.483025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:08.984084  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:09.482696  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:09.984384  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:10.482907  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:10.983542  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:11.483867  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:11.983960  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:12.484193  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:10.246433  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:10.745045  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:11.245844  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:11.745925  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:12.245788  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:12.745757  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:13.245844  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:13.744949  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:14.245762  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:14.745558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:10.936675  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:11.436272  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:11.937272  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:12.436971  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:12.936377  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:13.435972  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:13.936779  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:14.436521  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:14.936619  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:15.436449  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:12.983555  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:13.483112  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:13.983119  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:14.483571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:14.983564  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:15.483968  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:15.985107  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:16.482973  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:16.983852  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:17.483706  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:15.246146  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:15.745442  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:16.245478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:16.745851  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:17.245620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:17.745179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:18.245868  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:18.746515  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:19.245146  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:19.746353  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:15.937638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:16.436053  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:16.936310  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:17.436971  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:17.936846  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:18.436790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:18.936696  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:19.436200  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:19.936118  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:20.437161  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:17.983286  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:18.483618  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:18.983321  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:19.484098  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:19.982957  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:20.484192  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:20.982797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:21.483503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:21.983073  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:22.483344  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:20.246025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:20.745956  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:21.245670  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:21.745542  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:22.245486  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:22.745743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:23.246417  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:23.746516  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:24.245958  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:24.746331  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:20.936566  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:21.435804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:21.936340  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:22.436902  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:22.936275  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:23.437058  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:23.936691  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:24.436512  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:24.936664  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:25.436248  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:22.983164  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:23.483224  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:23.983637  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:24.483793  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:24.983642  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:25.484002  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:25.983546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:26.483485  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:26.983175  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:27.483045  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:25.246376  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:25.746128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:26.246162  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:26.747301  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:27.245957  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:27.745993  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:28.245413  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:28.746300  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:29.246016  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:29.745826  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:25.936102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:26.436787  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:26.936813  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:27.436289  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:27.937146  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:28.437238  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:28.937126  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:29.436740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:29.936271  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:30.437515  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:27.983720  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:28.484073  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:28.982865  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:29.483679  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:29.983626  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:30.484049  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:30.983790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.483561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.983415  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.483614  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:30.245267  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:30.746185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.246095  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.746548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.245436  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.745151  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.246297  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.746437  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.246245  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:30.936298  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.436831  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.936714  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.436596  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.936067  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.436898  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.936839  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.436572  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.936153  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.436037  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.983547  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.483091  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.983273  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.483523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.982933  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.483553  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.983907  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.484242  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.983005  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.245928  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.936921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (identical "waiting for pod" poll message repeated at ~500ms intervals by PIDs 569947, 568301, and 573699; the pod remains Pending through the final polls at 03:08:17.484102 [569947], 03:08:19.745631 [568301], and 03:08:17.436222 [573699]) ...
	I1219 03:08:17.937015  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.937083  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.436796  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.936995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.437457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.483942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.983638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.483595  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.484503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.983773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.483765  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.983647  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.483706  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.246047  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.746223  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.245764  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.246013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.245843  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.745567  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.246427  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.746391  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.937102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.435564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.936469  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.436649  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.436778  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.936059  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.437189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.937170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.436704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.982868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.484268  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.983374  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.483212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.983344  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.483884  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.983398  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.484023  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.984234  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.483988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.246093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.745866  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.245647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.747173  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.245862  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.745538  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.245299  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.746103  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.746350  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.937269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.435729  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.936734  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.436476  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.936918  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.436636  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.936510  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.436255  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.936175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.436005  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.983312  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.484050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.983339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.482531  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.982929  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.483747  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.983500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.482861  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.484296  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.245816  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.745632  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.245311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.746323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.246307  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.746634  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.245352  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.746294  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.246399  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.746747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.937031  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.436676  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.936840  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.436650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.936793  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.436310  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.936030  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.437178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.937165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.436157  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.983447  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.484087  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.484195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.483424  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.982827  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.483920  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.984144  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.484302  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.245293  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.746004  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.245793  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.746989  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.245794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.746839  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.245459  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.245861  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.745472  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.937370  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.435903  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.936747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.436447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.937054  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.937481  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.936333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.436131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.983136  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.484093  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.983753  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.483392  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.983335  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.483238  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.982643  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.483017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.983148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.484213  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.245696  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.745797  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.245831  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.245558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.745449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.246006  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.746105  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.246305  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.746990  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.936241  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.436869  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.936851  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.436552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.936544  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.436217  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.936790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.435881  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.937211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.435800  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.982609  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.483184  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.984245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.483444  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.983516  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.482273  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.982784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.483318  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.983225  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.484299  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.245057  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.745991  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.245705  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.746558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.245976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.745649  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.245488  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.745691  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.245062  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.745495  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.936018  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.437324  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:46.936366  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.436108  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.936330  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.435727  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.936825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.436120  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.937117  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.436125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:47.986039  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.483907  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:48.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.483362  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:49.982827  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.983035  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.483293  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.983566  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.483534  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.246060  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.746141  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.245517  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.745461  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.246136  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.746190  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.246005  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.745779  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.245690  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.746440  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:50.936858  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.436399  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:51.936936  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.436270  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.936040  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.436627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.935956  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.436964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.437181  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:52.982975  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.483774  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:53.983188  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.484313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:54.983476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.483624  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.983235  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.484059  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.983666  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.483836  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.246365  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.746334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.246033  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.746651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.245323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.746357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.245635  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.745658  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.245395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.745819  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:55.936516  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.436483  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:56.936444  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.936892  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.436633  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.936620  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.436269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.936896  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.436566  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:57.983297  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.484464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:58.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.483511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:59.982836  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.483736  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.983424  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.483308  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.982575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.483472  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.245397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.746693  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.245417  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.745772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.745980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.745540  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.245125  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.746311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:00.937461  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.436345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:01.937223  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.436491  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.936542  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.436156  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.936757  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.436434  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.437143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:02.983140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.483948  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:03.983404  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.484135  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:04.983017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.483191  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.983258  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.483593  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.982879  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.482719  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.245937  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.745523  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.246156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.746714  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.245457  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.745845  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.245496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.745521  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.745647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:05.936297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.435928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:06.936499  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.435693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.935885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.436830  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.937053  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.436174  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.936555  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.436004  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:07.983540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.483013  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:08.983280  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.483326  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:09.983039  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.483498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.983057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.483944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.983380  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.483057  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.246452  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.746248  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.246124  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.746214  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.245557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.746434  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.245268  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.746177  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.245924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.747881  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:10.936969  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:11.936145  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.435740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.937011  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.437024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.935613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.436125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.436909  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:12.984340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.483254  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:13.984703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.483313  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:14.982835  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.483493  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.982869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.983946  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.483204  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.245275  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.746276  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.245920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.746771  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.245651  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.744791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.245637  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.745922  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:15.936545  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:16.937153  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.435953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.937080  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.936110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.435657  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.935804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.436240  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:17.983897  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.483952  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:18.984052  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.484088  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:19.983714  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.483215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.983277  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.483667  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.982875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.483370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.245437  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.745749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.246263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.245277  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.245283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.745807  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.745496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:20.935998  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.436702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:21.936853  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.936508  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.435898  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.938866  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.436406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.936267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.435443  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:22.983387  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.483176  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:23.984078  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.483842  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:24.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.483314  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.482841  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.483709  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.746235  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.246283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.746411  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.246592  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.745927  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.245680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.745389  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.246386  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.745671  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.936495  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.436178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.435968  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.936852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.436035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (573699: identical poll repeated at ~500ms intervals, state unchanged) ...
	I1219 03:10:10.436154  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.983478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (569947: identical poll repeated at ~500ms intervals, state unchanged) ...
	I1219 03:10:12.483514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.245020  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (568301: identical poll repeated at ~500ms intervals, state unchanged) ...
	I1219 03:10:10.745120  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.246143  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.746163  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.245633  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.745368  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.246475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.745271  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.245933  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.745805  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.935671  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.436335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.936196  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.436273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.436266  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.936782  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.936448  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.436442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.983418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.483281  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.983117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.483767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.984021  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.483731  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.983275  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.483869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.983375  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.482882  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.245668  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.746147  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.246640  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.746736  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.246420  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.745966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.246253  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.745906  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.246303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.745986  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.937381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.436018  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.936227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.437410  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.935713  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.436449  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.935644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.435982  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.483558  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.983528  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.483170  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.984155  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.483754  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.983412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.483938  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.983465  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.483020  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.245720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.745374  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.246756  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.745755  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.245418  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.746818  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.245897  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.745485  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.245161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.746048  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.936177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.437209  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.936770  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.436342  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.936061  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.436819  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.935988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.436564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.935683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.437297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.983512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.484563  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.983839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.483026  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.983149  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.484482  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.983378  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.482721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.246464  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.746065  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.246367  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.746647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.245786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.746272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.245936  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.745748  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.245512  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.745830  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.936845  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.436141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.937290  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.437316  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.435947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.936694  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.936790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.436457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.983105  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.983252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.483908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.983724  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.483864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.983787  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.483233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.983574  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.482995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.245383  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.746128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.246198  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.246100  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.746119  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.246290  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.746036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.745323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.936983  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.437037  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.936606  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.435930  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.936507  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.936455  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.435839  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.436995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.984129  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.484212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.984303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.483306  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.983518  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.482738  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.982612  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.483504  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.983434  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.482971  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.246296  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.746349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.247475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.746626  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.246070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.746520  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.245142  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.745887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.245695  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.745960  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.936318  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.435767  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.936550  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.436719  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.935917  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.435988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.936787  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.436849  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.935749  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.436170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.483708  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.983679  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.483567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.983364  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.483546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.983622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.484178  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.482768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.745743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.246985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.746088  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.245673  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.746257  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.246242  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.745638  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.246113  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.745493  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.936613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.436769  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.937022  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.436509  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.436799  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.935953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.436096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.936230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.482661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.984210  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.482755  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.983557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.483535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.982947  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.483792  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.484233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.246539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.746528  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.746739  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.245697  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.745790  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.245070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.745400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.246339  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.745958  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.935928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.436394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.936522  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.436247  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.936524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.436518  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.936708  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.437978  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.936041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.436496  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.982762  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.983515  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.982989  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.483821  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.983511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.482875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.983288  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.483464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.245892  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.745342  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.246251  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.746455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.246114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.745679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.243066  568301 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:10:53.243101  568301 kapi.go:107] duration metric: took 6m0.001125868s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:53.243227  568301 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:53.244995  568301 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 03:10:53.246175  568301 addons.go:546] duration metric: took 6m5.940868392s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 03:10:53.246216  568301 start.go:247] waiting for cluster config update ...
	I1219 03:10:53.246230  568301 start.go:256] writing updated cluster config ...
	I1219 03:10:53.246533  568301 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:53.251613  568301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:53.256756  568301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.261260  568301 pod_ready.go:94] pod "coredns-66bc5c9577-qmb9z" is "Ready"
	I1219 03:10:53.261285  568301 pod_ready.go:86] duration metric: took 4.502294ms for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.263432  568301 pod_ready.go:83] waiting for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.267796  568301 pod_ready.go:94] pod "etcd-embed-certs-536489" is "Ready"
	I1219 03:10:53.267819  568301 pod_ready.go:86] duration metric: took 4.363443ms for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.269959  568301 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.273954  568301 pod_ready.go:94] pod "kube-apiserver-embed-certs-536489" is "Ready"
	I1219 03:10:53.273978  568301 pod_ready.go:86] duration metric: took 3.994974ms for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.276324  568301 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.655995  568301 pod_ready.go:94] pod "kube-controller-manager-embed-certs-536489" is "Ready"
	I1219 03:10:53.656024  568301 pod_ready.go:86] duration metric: took 379.67922ms for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.856274  568301 pod_ready.go:83] waiting for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.256232  568301 pod_ready.go:94] pod "kube-proxy-qhlhx" is "Ready"
	I1219 03:10:54.256260  568301 pod_ready.go:86] duration metric: took 399.957557ms for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.456456  568301 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856278  568301 pod_ready.go:94] pod "kube-scheduler-embed-certs-536489" is "Ready"
	I1219 03:10:54.856307  568301 pod_ready.go:86] duration metric: took 399.821962ms for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856318  568301 pod_ready.go:40] duration metric: took 1.60467121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:54.908914  568301 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:10:54.910224  568301 out.go:179] * Done! kubectl is now configured to use "embed-certs-536489" cluster and "default" namespace by default
	I1219 03:10:50.936043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.437199  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.937554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.436648  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.935325  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.437090  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.936467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.435747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.937514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.437259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.983483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.483110  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.483441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.983723  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.483799  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.983265  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.482795  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.980094  569947 kapi.go:107] duration metric: took 6m0.000564024s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:57.980271  569947 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:57.982221  569947 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:10:57.983556  569947 addons.go:546] duration metric: took 6m7.330731268s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:10:57.983643  569947 start.go:247] waiting for cluster config update ...
	I1219 03:10:57.983661  569947 start.go:256] writing updated cluster config ...
	I1219 03:10:57.983965  569947 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:57.988502  569947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:57.993252  569947 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:57.997922  569947 pod_ready.go:94] pod "coredns-7d764666f9-hm5hz" is "Ready"
	I1219 03:10:57.997946  569947 pod_ready.go:86] duration metric: took 4.66305ms for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.000317  569947 pod_ready.go:83] waiting for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.004843  569947 pod_ready.go:94] pod "etcd-no-preload-208281" is "Ready"
	I1219 03:10:58.004871  569947 pod_ready.go:86] duration metric: took 4.527165ms for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.006889  569947 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.010814  569947 pod_ready.go:94] pod "kube-apiserver-no-preload-208281" is "Ready"
	I1219 03:10:58.010843  569947 pod_ready.go:86] duration metric: took 3.912426ms for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.012893  569947 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.394606  569947 pod_ready.go:94] pod "kube-controller-manager-no-preload-208281" is "Ready"
	I1219 03:10:58.394643  569947 pod_ready.go:86] duration metric: took 381.720753ms for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.594310  569947 pod_ready.go:83] waiting for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.994002  569947 pod_ready.go:94] pod "kube-proxy-xst8w" is "Ready"
	I1219 03:10:58.994037  569947 pod_ready.go:86] duration metric: took 399.698104ms for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.194965  569947 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594191  569947 pod_ready.go:94] pod "kube-scheduler-no-preload-208281" is "Ready"
	I1219 03:10:59.594219  569947 pod_ready.go:86] duration metric: took 399.226469ms for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594230  569947 pod_ready.go:40] duration metric: took 1.605690954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:59.642421  569947 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:10:59.644674  569947 out.go:179] * Done! kubectl is now configured to use "no-preload-208281" cluster and "default" namespace by default
	I1219 03:10:55.937173  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.435825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.936702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.436527  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.436611  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.936591  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.436321  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.937837  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.436459  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.936639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.437141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.936951  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.436292  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.437702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.936237  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.436721  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.936104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.439639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.936149  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:06.433765  573699 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:11:06.433806  573699 kapi.go:107] duration metric: took 6m0.001182154s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:11:06.433932  573699 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:11:06.435864  573699 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:11:06.437280  573699 addons.go:546] duration metric: took 6m7.672932083s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:11:06.437331  573699 start.go:247] waiting for cluster config update ...
	I1219 03:11:06.437348  573699 start.go:256] writing updated cluster config ...
	I1219 03:11:06.437666  573699 ssh_runner.go:195] Run: rm -f paused
	I1219 03:11:06.441973  573699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:06.446110  573699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.450837  573699 pod_ready.go:94] pod "coredns-66bc5c9577-86vsf" is "Ready"
	I1219 03:11:06.450868  573699 pod_ready.go:86] duration metric: took 4.729554ms for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.453222  573699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.457430  573699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.457451  573699 pod_ready.go:86] duration metric: took 4.204892ms for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.459510  573699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.463733  573699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.463756  573699 pod_ready.go:86] duration metric: took 4.230488ms for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.465771  573699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.846433  573699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.846461  573699 pod_ready.go:86] duration metric: took 380.664307ms for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.046474  573699 pod_ready.go:83] waiting for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.446485  573699 pod_ready.go:94] pod "kube-proxy-lgw6f" is "Ready"
	I1219 03:11:07.446515  573699 pod_ready.go:86] duration metric: took 400.010893ms for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.647551  573699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046807  573699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:08.046840  573699 pod_ready.go:86] duration metric: took 399.227778ms for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046853  573699 pod_ready.go:40] duration metric: took 1.604833632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:08.095708  573699 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:11:08.097778  573699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103644" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                                    NAMESPACE
	ac0a0c539d898       59f642f485d26       4 minutes ago       Running             kubernetes-dashboard-web   0                   9bc0dd014fef8       kubernetes-dashboard-web-5c9f966b98-bngtm              kubernetes-dashboard
	0909564061f06       6e38f40d628db       14 minutes ago      Running             storage-provisioner        2                   5152a417532c2       storage-provisioner                                    kube-system
	a6e170e632275       4921d7a6dffa9       15 minutes ago      Running             kindnet-cni                1                   5849334da0dca       kindnet-vgs5z                                          kube-system
	47a843aefeca9       36eef8e07bdd6       15 minutes ago      Running             kube-proxy                 1                   042d932b9f0bc       kube-proxy-lgw6f                                       kube-system
	58c1d664efdd8       52546a367cc9e       15 minutes ago      Running             coredns                    1                   970053a95c619       coredns-66bc5c9577-86vsf                               kube-system
	b836d490b5796       6e38f40d628db       15 minutes ago      Exited              storage-provisioner        1                   5152a417532c2       storage-provisioner                                    kube-system
	a9f9dbf5e77fc       56cc512116c8f       15 minutes ago      Running             busybox                    1                   231308711d724       busybox                                                default
	19baa8a9717c2       5826b25d990d7       15 minutes ago      Running             kube-controller-manager    1                   6f1ccb334eeac       kube-controller-manager-default-k8s-diff-port-103644   kube-system
	c2591b42ec56d       aec12dadf56dd       15 minutes ago      Running             kube-scheduler             1                   0074e46caf6e4       kube-scheduler-default-k8s-diff-port-103644            kube-system
	a8858dc4fe6cf       aa27095f56193       15 minutes ago      Running             kube-apiserver             1                   6fa84325f8005       kube-apiserver-default-k8s-diff-port-103644            kube-system
	fc945986f0d8b       a3e246e9556e9       15 minutes ago      Running             etcd                       1                   84008f4f31b82       etcd-default-k8s-diff-port-103644                      kube-system
	cedf8929206c7       56cc512116c8f       15 minutes ago      Exited              busybox                    0                   37532e04fd49d       busybox                                                default
	36e5d694c8907       52546a367cc9e       15 minutes ago      Exited              coredns                    0                   07e3ff0e5bdf5       coredns-66bc5c9577-86vsf                               kube-system
	72384f1ad49d7       4921d7a6dffa9       15 minutes ago      Exited              kindnet-cni                0                   30b833d027ad1       kindnet-vgs5z                                          kube-system
	872846ec96d2d       36eef8e07bdd6       15 minutes ago      Exited              kube-proxy                 0                   fd1cacdcce013       kube-proxy-lgw6f                                       kube-system
	dd57b66fad064       aec12dadf56dd       16 minutes ago      Exited              kube-scheduler             0                   448f7bd23d9ce       kube-scheduler-default-k8s-diff-port-103644            kube-system
	ee8c252f3d8f4       5826b25d990d7       16 minutes ago      Exited              kube-controller-manager    0                   b546ea1d48bb1       kube-controller-manager-default-k8s-diff-port-103644   kube-system
	069eca43bbcc0       aa27095f56193       16 minutes ago      Exited              kube-apiserver             0                   b8db3828b19c8       kube-apiserver-default-k8s-diff-port-103644            kube-system
	49ae9ae966417       a3e246e9556e9       16 minutes ago      Exited              etcd                       0                   b8649ca1f26b0       etcd-default-k8s-diff-port-103644                      kube-system
	
	
	==> containerd <==
	Dec 19 03:19:59 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:19:59.665423740Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4275d7c883d3f735b8de47264bc63415.slice/cri-containerd-fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652.scope/hugetlb.1GB.events\""
	Dec 19 03:19:59 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:19:59.666446956Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.2MB.events\""
	Dec 19 03:19:59 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:19:59.666561672Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.1GB.events\""
	Dec 19 03:19:59 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:19:59.667401177Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.2MB.events\""
	Dec 19 03:19:59 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:19:59.667497995Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.683383359Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cf4b38188d4b0d664648ad2102013.slice/cri-containerd-a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.683468619Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cf4b38188d4b0d664648ad2102013.slice/cri-containerd-a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.684283446Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f4d1ce4fca33a4531f882f5fb97a4e.slice/cri-containerd-c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.684390209Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f4d1ce4fca33a4531f882f5fb97a4e.slice/cri-containerd-c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.685273361Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d1d235_bad2_4304_8138_0d5f860d9a2a.slice/cri-containerd-a9f9dbf5e77fc449643d926d72a65bfee72a213de581f62f436a89bf5abae44a.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.685405649Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d1d235_bad2_4304_8138_0d5f860d9a2a.slice/cri-containerd-a9f9dbf5e77fc449643d926d72a65bfee72a213de581f62f436a89bf5abae44a.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.686261180Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b924f3_ac71_431b_a3e6_f85f1e0b94e6.slice/cri-containerd-58c1d664efdd8684b61585de1ce35b1c3bc4e2857602c929dee1f70db16c68e0.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.686385749Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b924f3_ac71_431b_a3e6_f85f1e0b94e6.slice/cri-containerd-58c1d664efdd8684b61585de1ce35b1c3bc4e2857602c929dee1f70db16c68e0.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.687329693Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod3a78062f_cab2_4e56_bc36_33ecf9505255.slice/cri-containerd-a6e170e632275e1120bb398e83b22120c4c7eb49866f53c50f5736a071087f45.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.687424633Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod3a78062f_cab2_4e56_bc36_33ecf9505255.slice/cri-containerd-a6e170e632275e1120bb398e83b22120c4c7eb49866f53c50f5736a071087f45.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.688259440Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac53bb8a0832eefbaa4a648be6aad901.slice/cri-containerd-19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.688364452Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac53bb8a0832eefbaa4a648be6aad901.slice/cri-containerd-19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.689047662Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4461b1_0b30_427d_9e31_107cea049612.slice/cri-containerd-47a843aefeca97fb22cc246b51d4c45d4468c52e15b42a86d187a0f0219b93c1.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.689131165Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4461b1_0b30_427d_9e31_107cea049612.slice/cri-containerd-47a843aefeca97fb22cc246b51d4c45d4468c52e15b42a86d187a0f0219b93c1.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.689981476Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4275d7c883d3f735b8de47264bc63415.slice/cri-containerd-fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.690087334Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4275d7c883d3f735b8de47264bc63415.slice/cri-containerd-fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.691006707Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.691122772Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.1GB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.691887511Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.2MB.events\""
	Dec 19 03:20:09 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:20:09.691979309Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.1GB.events\""
	
	
	==> coredns [36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38780 - 14298 "HINFO IN 3502738313717446473.3594976055449755558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04275935s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [58c1d664efdd8684b61585de1ce35b1c3bc4e2857602c929dee1f70db16c68e0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39833 - 46591 "HINFO IN 7296903648635083896.2998695300198609950. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062731195s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-103644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-103644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103644
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:20:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:19:18 +0000   Fri, 19 Dec 2025 03:04:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:19:18 +0000   Fri, 19 Dec 2025 03:04:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:19:18 +0000   Fri, 19 Dec 2025 03:04:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:19:18 +0000   Fri, 19 Dec 2025 03:04:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-103644
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                ecfbcbac-fe01-4091-9f52-5962521dd868
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-86vsf                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-default-k8s-diff-port-103644                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-vgs5z                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-103644              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103644     200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-lgw6f                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-103644              100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-746fcd58dc-tctv8                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-api-b9fbd5f9b-dpv56                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-auth-85fbf6f9bb-jzn2l               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-k2snn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-bngtm                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-103644 event: Registered Node default-k8s-diff-port-103644 in Controller
	  Normal  NodeReady                15m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeReady
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x9 over 15m)  kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-103644 event: Registered Node default-k8s-diff-port-103644 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233] <==
	{"level":"warn","ts":"2025-12-19T03:04:01.606669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.618756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.626680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.634379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.641184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.647980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.655487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.662426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.671215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.678050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.684701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.691898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.698702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.705237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.711908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.719836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.728041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.737149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.745905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.753216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.760860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.779738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.787000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.794957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.851235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37240","server-name":"","error":"EOF"}
	
	
	==> etcd [fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652] <==
	{"level":"warn","ts":"2025-12-19T03:04:59.753194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.762425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.770854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.779904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.795473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.802792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.810499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.881417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.695955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.722151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.738895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.751177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.827689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.865441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.884527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.897782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.920481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.940127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.956785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:14:59.205849Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1294}
	{"level":"info","ts":"2025-12-19T03:14:59.227199Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1294,"took":"20.97744ms","hash":3936518112,"current-db-size-bytes":4775936,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T03:14:59.227265Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3936518112,"revision":1294,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:59.209702Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1572}
	{"level":"info","ts":"2025-12-19T03:19:59.212262Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1572,"took":"2.190137ms","hash":2394319790,"current-db-size-bytes":4775936,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":2347008,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-12-19T03:19:59.212295Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2394319790,"revision":1572,"compact-revision":1294}
	
	
	==> kernel <==
	 03:20:10 up  2:02,  0 user,  load average: 0.78, 0.70, 3.84
	Linux default-k8s-diff-port-103644 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d] <==
	I1219 03:04:14.534663       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:14.534971       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1219 03:04:14.535141       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:14.535168       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:14.535205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:14.754984       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:14.755030       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:14.755058       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:14.755213       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:15.355607       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:15.355657       1 metrics.go:72] Registering metrics
	I1219 03:04:15.355726       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:24.830782       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:04:24.830850       1 main.go:301] handling current node
	I1219 03:04:34.830919       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:04:34.830978       1 main.go:301] handling current node
	
	
	==> kindnet [a6e170e632275e1120bb398e83b22120c4c7eb49866f53c50f5736a071087f45] <==
	I1219 03:18:02.090014       1 main.go:301] handling current node
	I1219 03:18:12.089417       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:18:12.089450       1 main.go:301] handling current node
	I1219 03:18:22.090617       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:18:22.090669       1 main.go:301] handling current node
	I1219 03:18:32.096617       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:18:32.096652       1 main.go:301] handling current node
	I1219 03:18:42.095887       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:18:42.095927       1 main.go:301] handling current node
	I1219 03:18:52.088376       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:18:52.088423       1 main.go:301] handling current node
	I1219 03:19:02.089835       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:19:02.089878       1 main.go:301] handling current node
	I1219 03:19:12.088762       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:19:12.088797       1 main.go:301] handling current node
	I1219 03:19:22.088118       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:19:22.088173       1 main.go:301] handling current node
	I1219 03:19:32.092778       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:19:32.092840       1 main.go:301] handling current node
	I1219 03:19:42.087690       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:19:42.087728       1 main.go:301] handling current node
	I1219 03:19:52.088711       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:19:52.088754       1 main.go:301] handling current node
	I1219 03:20:02.088357       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:20:02.088398       1 main.go:301] handling current node
	
	
	==> kube-apiserver [069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd] <==
	I1219 03:04:05.334655       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:04:05.344666       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:04:10.473953       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:10.478423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:10.615157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:04:10.620643       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:04:36.661994       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:35430: use of closed network connection
	I1219 03:04:37.342960       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:37.346399       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:37.346459       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:04:37.346512       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:37.415122       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.101.92.168"}
	W1219 03:04:37.420925       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:37.420991       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:04:37.426248       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:37.426304       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1] <==
	 > logger="UnhandledError"
	I1219 03:16:01.382252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:18:01.381503       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:18:01.381559       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:18:01.381573       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:18:01.382668       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:18:01.382763       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:18:01.382781       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:20:00.385774       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:20:00.385883       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:20:01.387063       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 03:20:01.387126       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:20:01.387180       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:20:01.387204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:20:01.387181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:20:01.388376       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c] <==
	I1219 03:14:05.163649       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:14:35.075641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:14:35.171246       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:15:05.080395       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:15:05.179565       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:15:35.085238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:15:35.187236       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:16:05.090436       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:16:05.195425       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:16:35.094680       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:16:35.202353       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:17:05.099272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:17:05.210099       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:17:35.103812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:17:35.217863       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:18:05.108991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:18:05.225323       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:18:35.113007       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:18:35.232571       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:19:05.117766       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:19:05.240448       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:19:35.122105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:19:35.248192       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:20:05.126875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:20:05.256013       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503] <==
	I1219 03:04:09.518750       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 03:04:09.518772       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:04:09.518714       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:04:09.518844       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:04:09.519026       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 03:04:09.520319       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:04:09.520450       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:04:09.521176       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:04:09.521198       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:04:09.522757       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:04:09.522886       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:04:09.522951       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:04:09.522995       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:04:09.523002       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:04:09.523032       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:04:09.525792       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:04:09.525891       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:04:09.531004       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-103644" podCIDRs=["10.244.0.0/24"]
	I1219 03:04:09.532016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:04:09.534268       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 03:04:09.534391       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 03:04:09.534495       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103644"
	I1219 03:04:09.534569       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1219 03:04:09.544075       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:04:29.536364       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [47a843aefeca97fb22cc246b51d4c45d4468c52e15b42a86d187a0f0219b93c1] <==
	I1219 03:05:01.517369       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:01.589222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:01.690045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:01.690103       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:05:01.690217       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:01.722003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:01.722073       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:01.730736       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:01.731726       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:01.731879       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:01.739465       1 config.go:200] "Starting service config controller"
	I1219 03:05:01.739484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:01.739503       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:01.739507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:01.739522       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:01.739526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:01.739641       1 config.go:309] "Starting node config controller"
	I1219 03:05:01.739660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:01.739669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:01.840105       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:01.840105       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:01.840164       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358] <==
	I1219 03:04:11.251855       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:11.333860       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:04:11.434180       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:04:11.434222       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:04:11.434338       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:11.457457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:11.457519       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:04:11.463613       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:11.464075       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:04:11.464128       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:11.465604       1 config.go:200] "Starting service config controller"
	I1219 03:04:11.465683       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:11.465703       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:11.465727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:11.465758       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:11.465766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:11.465808       1 config.go:309] "Starting node config controller"
	I1219 03:04:11.465820       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:11.565945       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:11.565947       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:11.565992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:04:11.565947       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7] <==
	I1219 03:04:59.843962       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:05:01.342682       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:05:01.342721       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:01.349758       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:01.349809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:01.349953       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:01.349977       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:01.350067       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:05:01.350271       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:05:01.350665       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:01.350757       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:01.450325       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:01.450476       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 03:05:01.453128       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525] <==
	E1219 03:04:02.553057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:04:02.553110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:04:02.553135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 03:04:02.553343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:04:02.553704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:04:02.553761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:04:02.554155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:04:02.554656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:04:02.554658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:04:02.554761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:04:02.554859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:04:02.555197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:04:03.436712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:04:03.486188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:04:03.545873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:04:03.577212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:04:03.612471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:04:03.655998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:04:03.678451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:04:03.678451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:04:03.684113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:04:03.733392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:04:03.777812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:04:03.848470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1219 03:04:05.447006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:19:06 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:06.495286     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:19:08 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:08.495712     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:19:10 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:10.495214     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:19:13 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:13.495655     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:19:18 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:18.494966     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:19:19 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:19.494760     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:19:23 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:23.495568     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:19:25 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:25.495611     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:19:26 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:26.495377     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:19:30 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:30.495175     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:19:33 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:33.495456     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:19:34 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:34.494975     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:19:40 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:40.495469     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:19:41 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:41.494715     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:19:41 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:41.494806     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:19:47 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:47.495796     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:19:47 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:47.495929     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:19:52 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:52.495693     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:19:55 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:55.495624     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:19:55 default-k8s-diff-port-103644 kubelet[593]: E1219 03:19:55.495673     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:20:00 default-k8s-diff-port-103644 kubelet[593]: E1219 03:20:00.494651     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:20:02 default-k8s-diff-port-103644 kubelet[593]: E1219 03:20:02.495433     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:20:05 default-k8s-diff-port-103644 kubelet[593]: E1219 03:20:05.495691     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:20:09 default-k8s-diff-port-103644 kubelet[593]: E1219 03:20:09.495114     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:20:09 default-k8s-diff-port-103644 kubelet[593]: E1219 03:20:09.495125     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	
	
	==> kubernetes-dashboard [ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c] <==
	I1219 03:16:04.732516       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:16:04.732599       1 init.go:48] Using in-cluster config
	I1219 03:16:04.732829       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e] <==
	W1219 03:19:45.281188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:47.284063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:47.287773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:49.290440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:49.294066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:51.297641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:51.301265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:53.305035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:53.310415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:55.314205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:55.318208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:57.321450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:57.326024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:59.328973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:19:59.332487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:01.336052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:01.341050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:03.344131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:03.347917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:05.350941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:05.354913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:07.357750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:07.361417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:09.364908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:20:09.369639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b836d490b57969a785b22ae7a7c6bfd0c9e0d003c578aa06dbd2415b8ef44317] <==
	I1219 03:05:01.336234       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:31.340069       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 describe pod metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103644 describe pod metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975: exit status 1 (66.861713ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-tctv8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-b9fbd5f9b-dpv56" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-k2snn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" not found

** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-103644 describe pod metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.98s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:16:19.191022  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:18:12.809720  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:18:39.193738  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:23:26.49477758 +0000 UTC m=+3483.487901781
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-002036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-002036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (68.044202ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-002036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-002036
helpers_test.go:244: (dbg) docker inspect old-k8s-version-002036:

-- stdout --
	[
	    {
	        "Id": "9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7",
	        "Created": "2025-12-19T03:03:20.787101116Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 566921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:34.927054149Z",
	            "FinishedAt": "2025-12-19T03:04:34.001324871Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/hosts",
	        "LogPath": "/var/lib/docker/containers/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7/9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7-json.log",
	        "Name": "/old-k8s-version-002036",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-002036:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-002036",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a09c191febd877b5a9d188d5a58ce6e4a4f355029b8660f49243998b1fd98b7",
	                "LowerDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3805bfc7eeb1f171cb9e9dcde5558afa9342b710903f2547fe64c0b26d0ee151/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-002036",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-002036/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-002036",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-002036",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-002036",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b1b417730d31ff86348063da70554761802e2e18a90604463935ed127c8d369f",
	            "SandboxKey": "/var/run/docker/netns/b1b417730d31",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-002036": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0625f500f89a10f1e85ab1719542921f7a0ad6e299e9584edf6d3813be5348f",
	                    "EndpointID": "b90e4a6ec14c36b274745380d6bffb986a2ad735ad97876cecca6fc84bbde272",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:c9:8b:db:76:54",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-002036",
	                        "9a09c191febd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-002036 -n old-k8s-version-002036
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-002036 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-002036 logs -n 25: (1.638042241s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-967008                                                                                                                                                                                                                              │ cert-options-967008          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p kubernetes-upgrade-340572                                                                                                                                                                                                                        │ kubernetes-upgrade-340572    │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p NoKubernetes-821572 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │                     │
	│ delete  │ -p NoKubernetes-821572                                                                                                                                                                                                                              │ NoKubernetes-821572          │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ delete  │ -p disable-driver-mounts-443690                                                                                                                                                                                                                     │ disable-driver-mounts-443690 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:03 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:03 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-002036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p old-k8s-version-002036 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p embed-certs-536489 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:50.472071  573699 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:50.472443  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.472454  573699 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:50.472463  573699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:50.473301  573699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:04:50.474126  573699 out.go:368] Setting JSON to false
	I1219 03:04:50.476304  573699 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6429,"bootTime":1766107061,"procs":363,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:50.476440  573699 start.go:143] virtualization: kvm guest
	I1219 03:04:50.478144  573699 out.go:179] * [default-k8s-diff-port-103644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:50.479945  573699 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:50.480003  573699 notify.go:221] Checking for updates...
	I1219 03:04:50.482332  573699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:50.483901  573699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.485635  573699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:04:50.489602  573699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:50.493460  573699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:48.691145  569947 cli_runner.go:164] Run: docker network inspect no-preload-208281 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:48.711282  569947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:48.716221  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.729144  569947 kubeadm.go:884] updating cluster {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:48.729324  569947 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:04:48.729375  569947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:48.763109  569947 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:48.763136  569947 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:48.763146  569947 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:04:48.763264  569947 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-208281 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:48.763347  569947 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:48.796269  569947 cni.go:84] Creating CNI manager for ""
	I1219 03:04:48.796300  569947 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:48.796329  569947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:48.796369  569947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-208281 NodeName:no-preload-208281 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:48.796558  569947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-208281"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:48.796669  569947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:04:48.808026  569947 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:48.808102  569947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:48.819240  569947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1219 03:04:48.836384  569947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:04:48.852550  569947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I1219 03:04:48.869275  569947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:48.873704  569947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:48.886490  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:48.994443  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:49.020494  569947 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281 for IP: 192.168.85.2
	I1219 03:04:49.020518  569947 certs.go:195] generating shared ca certs ...
	I1219 03:04:49.020533  569947 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:49.020722  569947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:49.020809  569947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:49.020826  569947 certs.go:257] generating profile certs ...
	I1219 03:04:49.020975  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.key
	I1219 03:04:49.021064  569947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key.8f504093
	I1219 03:04:49.021159  569947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key
	I1219 03:04:49.021324  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:49.021373  569947 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:49.021389  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:49.021430  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:49.021457  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:49.021480  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:49.021525  569947 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:49.022292  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:49.050958  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:49.072475  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:49.095867  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:49.124289  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:04:49.150664  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:04:49.188239  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:49.216791  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:49.242767  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:49.264732  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:49.286635  569947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:49.313716  569947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:49.329405  569947 ssh_runner.go:195] Run: openssl version
	I1219 03:04:49.337082  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.347002  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:49.355979  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.360975  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.361048  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:49.457547  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:49.470846  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.484764  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:49.501564  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510435  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.510523  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:49.583657  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:49.596341  569947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.615267  569947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:49.637741  569947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651506  569947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.651606  569947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:49.719393  569947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:49.738446  569947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:49.759885  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:49.839963  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:49.916940  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:49.984478  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:50.052790  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:50.213057  569947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:50.323267  569947 kubeadm.go:401] StartCluster: {Name:no-preload-208281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-208281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.323602  569947 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:50.323919  569947 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:50.475134  569947 cri.go:92] found id: "cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa"
	I1219 03:04:50.475159  569947 cri.go:92] found id: "fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569"
	I1219 03:04:50.475166  569947 cri.go:92] found id: "e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a"
	I1219 03:04:50.475171  569947 cri.go:92] found id: "496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c"
	I1219 03:04:50.475175  569947 cri.go:92] found id: "0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7"
	I1219 03:04:50.475180  569947 cri.go:92] found id: "1b139b90f72cc73cf0a391fb1b6dde88df245b3d92b6a686104996e14c38330c"
	I1219 03:04:50.475184  569947 cri.go:92] found id: "6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5"
	I1219 03:04:50.475506  569947 cri.go:92] found id: "6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e"
	I1219 03:04:50.475532  569947 cri.go:92] found id: "0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9"
	I1219 03:04:50.475549  569947 cri.go:92] found id: "7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459"
	I1219 03:04:50.475553  569947 cri.go:92] found id: "06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b"
	I1219 03:04:50.475557  569947 cri.go:92] found id: "ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400"
	I1219 03:04:50.475562  569947 cri.go:92] found id: ""
	I1219 03:04:50.475632  569947 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:50.558499  569947 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","pid":805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e/rootfs","created":"2025-12-19T03:04:49.720787385Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-208281_355754afcd0ce2d7bab6c853c60e836c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","pid":857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2/rootfs","created":"2025-12-19T03:04:49.778097457Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-208281_e43ae2e7891eaa1ff806e636f311fb81","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","pid":838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07/rootfs","created":"2025-12-19T03:04:49.777265025Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-208281_80442131b1359e6657f2959b40f80467","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"},{"ociVersion":"1.2.1","id":"496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","pid":902,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c/rootfs","created":"2025-12-19T03:04:49.944110218Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"355754afcd0ce2d7bab6c853c60e836c"},"owner":"root"},{"ociVersion":"1.2.1","id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","pid":845,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3/rootfs","created":"2025-12-19T03:04:49.76636358Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-208281_93a9992ff7a9c41e489b493737b5b488","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","pid":964,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa/rootfs","created":"2025-12-19T03:04:50.065275653Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e43ae2e7891eaa1ff806e636f311fb81"},"owner":"root"},{"ociVersion":"1.2.1","id":"e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a/rootfs","created":"2025-12-19T03:04:50.024946214Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3","io.kubernetes.cri.sandbox-name":"etcd-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93a9992ff7a9c41e489b493737b5b488"},"owner":"root"},{"ociVersion":"1.2.1","id":"fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","pid":979,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569/rootfs","created":"2025-12-19T03:04:50.153274168Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-208281","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"80442131b1359e6657f2959b40f80467"},"owner":"root"}]
	I1219 03:04:50.559253  569947 cri.go:129] list returned 8 containers
	I1219 03:04:50.559288  569947 cri.go:132] container: {ID:2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e Status:running}
	I1219 03:04:50.559310  569947 cri.go:134] skipping 2379cbb88b4433650ade336b6320fe20a8e4907825957318999810a514165e6e - not in ps
	I1219 03:04:50.559318  569947 cri.go:132] container: {ID:38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 Status:running}
	I1219 03:04:50.559326  569947 cri.go:134] skipping 38931718ec045783470e9a1d913a9a9c05077db3a90703385f4fb4a1667664f2 - not in ps
	I1219 03:04:50.559332  569947 cri.go:132] container: {ID:46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 Status:running}
	I1219 03:04:50.559338  569947 cri.go:134] skipping 46efefa83a3c7ef9fc0acf51455ccd0f9b6e6fce80a57e43de82b11915e2ee07 - not in ps
	I1219 03:04:50.559343  569947 cri.go:132] container: {ID:496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c Status:running}
	I1219 03:04:50.559363  569947 cri.go:138] skipping {496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c running}: state = "running", want "paused"
	I1219 03:04:50.559373  569947 cri.go:132] container: {ID:7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 Status:running}
	I1219 03:04:50.559381  569947 cri.go:134] skipping 7d8e57ec3badf4087436cc732a67ce490fa8a67cdaa8f122d22d1d2973dd4db3 - not in ps
	I1219 03:04:50.559386  569947 cri.go:132] container: {ID:cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa Status:running}
	I1219 03:04:50.559393  569947 cri.go:138] skipping {cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa running}: state = "running", want "paused"
	I1219 03:04:50.559400  569947 cri.go:132] container: {ID:e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a Status:running}
	I1219 03:04:50.559406  569947 cri.go:138] skipping {e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a running}: state = "running", want "paused"
	I1219 03:04:50.559412  569947 cri.go:132] container: {ID:fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 Status:running}
	I1219 03:04:50.559419  569947 cri.go:138] skipping {fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569 running}: state = "running", want "paused"
	I1219 03:04:50.559472  569947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:50.576564  569947 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:50.576683  569947 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:50.576777  569947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:50.600225  569947 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:50.601759  569947 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-208281" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.605721  569947 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-208281" cluster setting kubeconfig missing "no-preload-208281" context setting]
	I1219 03:04:50.610686  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.613392  569947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:50.647032  569947 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1219 03:04:50.647196  569947 kubeadm.go:602] duration metric: took 70.481994ms to restartPrimaryControlPlane
	I1219 03:04:50.647478  569947 kubeadm.go:403] duration metric: took 324.224528ms to StartCluster
	I1219 03:04:50.647573  569947 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.647991  569947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:50.652215  569947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:50.652837  569947 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:50.652966  569947 addons.go:70] Setting storage-provisioner=true in profile "no-preload-208281"
	I1219 03:04:50.652984  569947 addons.go:239] Setting addon storage-provisioner=true in "no-preload-208281"
	W1219 03:04:50.652993  569947 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:50.653027  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.653048  569947 config.go:182] Loaded profile config "no-preload-208281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:50.653120  569947 addons.go:70] Setting default-storageclass=true in profile "no-preload-208281"
	I1219 03:04:50.653135  569947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-208281"
	I1219 03:04:50.653460  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.653534  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.655588  569947 addons.go:70] Setting metrics-server=true in profile "no-preload-208281"
	I1219 03:04:50.655611  569947 addons.go:239] Setting addon metrics-server=true in "no-preload-208281"
	W1219 03:04:50.655621  569947 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:50.655656  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.656118  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.656525  569947 addons.go:70] Setting dashboard=true in profile "no-preload-208281"
	I1219 03:04:50.656563  569947 addons.go:239] Setting addon dashboard=true in "no-preload-208281"
	W1219 03:04:50.656574  569947 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:50.656622  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.657316  569947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:50.657617  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.660722  569947 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:50.661854  569947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:50.707508  569947 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:50.708775  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:50.708812  569947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:50.708834  569947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:50.495202  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:50.495941  573699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:50.539840  573699 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:50.540119  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.710990  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.671412726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.711217  573699 docker.go:319] overlay module found
	I1219 03:04:50.713697  573699 out.go:179] * Using the docker driver based on existing profile
	I1219 03:04:50.714949  573699 start.go:309] selected driver: docker
	I1219 03:04:50.714970  573699 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.715089  573699 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:50.716020  573699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:50.884011  573699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:04:50.859280212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:50.884478  573699 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:50.884531  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:50.884789  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:50.884940  573699 start.go:353] cluster config:
	{Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:50.887403  573699 out.go:179] * Starting "default-k8s-diff-port-103644" primary control-plane node in "default-k8s-diff-port-103644" cluster
	I1219 03:04:50.888689  573699 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:04:50.889896  573699 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:50.891030  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:50.891092  573699 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:04:50.891106  573699 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:50.891194  573699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:50.891211  573699 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:50.891221  573699 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:04:50.891356  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:50.932991  573699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:50.933024  573699 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:50.933040  573699 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:50.933079  573699 start.go:360] acquireMachinesLock for default-k8s-diff-port-103644: {Name:mk39933c40de3c92aeeb6b9d20d3c90e6af0f1fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:50.933158  573699 start.go:364] duration metric: took 48.804µs to acquireMachinesLock for "default-k8s-diff-port-103644"
	I1219 03:04:50.933177  573699 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:04:50.933183  573699 fix.go:54] fixHost starting: 
	I1219 03:04:50.933489  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:50.973427  573699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103644: state=Stopped err=<nil>
	W1219 03:04:50.973619  573699 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:50.748260  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.195228143s)
	I1219 03:04:50.748361  566718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:51.828106  566718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml: (1.079706419s)
	I1219 03:04:51.828277  566718 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-002036"
	I1219 03:04:51.828773  566718 cli_runner.go:164] Run: docker container inspect old-k8s-version-002036 --format={{.State.Status}}
	I1219 03:04:51.856291  566718 out.go:179] * Verifying dashboard addon...
	I1219 03:04:50.708886  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.709108  569947 addons.go:239] Setting addon default-storageclass=true in "no-preload-208281"
	W1219 03:04:50.709132  569947 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:50.709161  569947 host.go:66] Checking if "no-preload-208281" exists ...
	I1219 03:04:50.709725  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:50.710101  569947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.710123  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:50.710173  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.716696  569947 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:50.716718  569947 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:50.716777  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.770714  569947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:50.770743  569947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:50.770811  569947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-208281
	I1219 03:04:50.772323  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.774548  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.782771  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.818125  569947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/no-preload-208281/id_rsa Username:docker}
	I1219 03:04:50.922492  569947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:50.961986  569947 node_ready.go:35] waiting up to 6m0s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:50.964889  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:50.991305  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:50.991337  569947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:50.997863  569947 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:51.029470  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:51.029507  569947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:51.077218  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:51.083520  569947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:51.083552  569947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:51.107276  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:52.474618  569947 node_ready.go:49] node "no-preload-208281" is "Ready"
	I1219 03:04:52.474662  569947 node_ready.go:38] duration metric: took 1.512481187s for node "no-preload-208281" to be "Ready" ...
	I1219 03:04:52.474682  569947 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:04:52.474743  569947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:51.142743  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.559306992s)
	I1219 03:04:51.142940  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.499593696s)
	I1219 03:04:51.143060  568301 addons.go:500] Verifying addon metrics-server=true in "embed-certs-536489"
	I1219 03:04:51.143722  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:51.144038  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.580066034s)
	I1219 03:04:52.990446  568301 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.445475643s)
	I1219 03:04:52.990490  568301 api_server.go:72] duration metric: took 5.685402741s to wait for apiserver process to appear ...
	I1219 03:04:52.990498  568301 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:52.990528  568301 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:04:52.992275  568301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.373532841s)
	I1219 03:04:52.992364  568301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:53.002104  568301 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:04:53.006331  568301 api_server.go:141] control plane version: v1.34.3
	I1219 03:04:53.006385  568301 api_server.go:131] duration metric: took 15.878835ms to wait for apiserver health ...
	I1219 03:04:53.006399  568301 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:53.016977  568301 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:53.017141  568301 system_pods.go:61] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.017157  568301 system_pods.go:61] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.017165  568301 system_pods.go:61] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.017184  568301 system_pods.go:61] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.017193  568301 system_pods.go:61] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.017199  568301 system_pods.go:61] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.017212  568301 system_pods.go:61] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.017219  568301 system_pods.go:61] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.017225  568301 system_pods.go:61] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.017233  568301 system_pods.go:74] duration metric: took 10.826754ms to wait for pod list to return data ...
	I1219 03:04:53.017244  568301 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:53.020879  568301 default_sa.go:45] found service account: "default"
	I1219 03:04:53.020911  568301 default_sa.go:55] duration metric: took 3.659738ms for default service account to be created ...
	I1219 03:04:53.020925  568301 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:53.118092  568301 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:53.118237  568301 system_pods.go:89] "coredns-66bc5c9577-qmb9z" [dd0dceb8-d48d-4215-82f5-df001a8ffe5f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:53.118277  568301 system_pods.go:89] "etcd-embed-certs-536489" [b3cbe090-1470-477e-87da-d93ca2bf3394] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:53.118286  568301 system_pods.go:89] "kindnet-kzlhv" [2a4d0c65-8aff-4b2f-bb3d-d79b89f560ca] Running
	I1219 03:04:53.118334  568301 system_pods.go:89] "kube-apiserver-embed-certs-536489" [18c7bfaa-73a6-457a-9a58-05d2ffa0de1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:53.118346  568301 system_pods.go:89] "kube-controller-manager-embed-certs-536489" [d657289e-8fd1-4ed3-94c2-194aa95545f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:53.118360  568301 system_pods.go:89] "kube-proxy-qhlhx" [bc7f26c2-aed8-4540-bd1f-0ee0b1974137] Running
	I1219 03:04:53.118368  568301 system_pods.go:89] "kube-scheduler-embed-certs-536489" [72b72681-cda6-48b6-9f43-9c9b125883b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:53.118508  568301 system_pods.go:89] "metrics-server-746fcd58dc-8458x" [47114157-df98-40be-815f-7437499ca215] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:53.118523  568301 system_pods.go:89] "storage-provisioner" [51c90b41-88a3-4279-84d8-13a52b7ef246] Running
	I1219 03:04:53.118535  568301 system_pods.go:126] duration metric: took 97.602528ms to wait for k8s-apps to be running ...
	I1219 03:04:53.118546  568301 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:53.118629  568301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:53.213539  568301 addons.go:500] Verifying addon dashboard=true in "embed-certs-536489"
	I1219 03:04:53.213985  568301 cli_runner.go:164] Run: docker container inspect embed-certs-536489 --format={{.State.Status}}
	I1219 03:04:53.214117  568301 system_svc.go:56] duration metric: took 95.561896ms WaitForService to wait for kubelet
	I1219 03:04:53.214162  568301 kubeadm.go:587] duration metric: took 5.909072172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:53.214187  568301 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:53.220086  568301 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:53.220122  568301 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:53.220143  568301 node_conditions.go:105] duration metric: took 5.94983ms to run NodePressure ...
	I1219 03:04:53.220159  568301 start.go:242] waiting for startup goroutines ...
	I1219 03:04:53.239792  568301 out.go:179] * Verifying dashboard addon...
	I1219 03:04:51.859124  566718 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:51.862362  566718 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.241980  568301 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:53.245176  568301 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747449  568301 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:53.747476  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.245867  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:54.747323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:50.976005  573699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103644" ...
	I1219 03:04:50.976124  573699 cli_runner.go:164] Run: docker start default-k8s-diff-port-103644
	I1219 03:04:51.482862  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:51.514418  573699 kic.go:430] container "default-k8s-diff-port-103644" state is running.
	I1219 03:04:51.515091  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:51.545304  573699 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/config.json ...
	I1219 03:04:51.545913  573699 machine.go:94] provisionDockerMachine start ...
	I1219 03:04:51.546012  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:51.578064  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:51.578471  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:51.578526  573699 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:04:51.580615  573699 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46348->127.0.0.1:33098: read: connection reset by peer
	I1219 03:04:54.740022  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.740053  573699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103644"
	I1219 03:04:54.740121  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.764557  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.764812  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.764832  573699 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103644 && echo "default-k8s-diff-port-103644" | sudo tee /etc/hostname
	I1219 03:04:54.940991  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103644
	
	I1219 03:04:54.941090  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:54.961163  573699 main.go:144] libmachine: Using SSH client type: native
	I1219 03:04:54.961447  573699 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1219 03:04:54.961472  573699 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103644/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:04:55.112211  573699 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:04:55.112238  573699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:04:55.112272  573699 ubuntu.go:190] setting up certificates
	I1219 03:04:55.112285  573699 provision.go:84] configureAuth start
	I1219 03:04:55.112354  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.131633  573699 provision.go:143] copyHostCerts
	I1219 03:04:55.131701  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:04:55.131722  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:04:55.131814  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:04:55.131992  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:04:55.132009  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:04:55.132066  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:04:55.132178  573699 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:04:55.132189  573699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:04:55.132230  573699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:04:55.132339  573699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103644 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-103644 localhost minikube]
	I1219 03:04:55.201421  573699 provision.go:177] copyRemoteCerts
	I1219 03:04:55.201486  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:04:55.201545  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.220254  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.324809  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:04:55.344299  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:04:55.364633  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:04:55.383945  573699 provision.go:87] duration metric: took 271.644189ms to configureAuth
	I1219 03:04:55.383975  573699 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:04:55.384174  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:55.384190  573699 machine.go:97] duration metric: took 3.838258422s to provisionDockerMachine
	I1219 03:04:55.384201  573699 start.go:293] postStartSetup for "default-k8s-diff-port-103644" (driver="docker")
	I1219 03:04:55.384218  573699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:04:55.384292  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:04:55.384363  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.402689  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.509385  573699 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:04:55.513698  573699 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:04:55.513738  573699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:04:55.513752  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:04:55.513809  573699 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:04:55.513923  573699 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:04:55.514061  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:04:55.522610  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:55.542136  573699 start.go:296] duration metric: took 157.911131ms for postStartSetup
	I1219 03:04:55.542235  573699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:04:55.542278  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.560317  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.676892  573699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:04:55.683207  573699 fix.go:56] duration metric: took 4.75001221s for fixHost
	I1219 03:04:55.683240  573699 start.go:83] releasing machines lock for "default-k8s-diff-port-103644", held for 4.750073001s
	I1219 03:04:55.683337  573699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103644
	I1219 03:04:55.706632  573699 ssh_runner.go:195] Run: cat /version.json
	I1219 03:04:55.706696  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.706708  573699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:04:55.706796  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:55.729248  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.729555  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:55.832375  573699 ssh_runner.go:195] Run: systemctl --version
	I1219 03:04:55.888761  573699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:04:55.894089  573699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:04:55.894170  573699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:04:55.902973  573699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:04:55.903001  573699 start.go:496] detecting cgroup driver to use...
	I1219 03:04:55.903039  573699 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:04:55.903123  573699 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:04:55.924413  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:04:55.939247  573699 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:04:55.939312  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:04:55.955848  573699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:04:55.970636  573699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:04:56.060548  573699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:04:56.151469  573699 docker.go:234] disabling docker service ...
	I1219 03:04:56.151544  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:04:56.168733  573699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:04:56.183785  573699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:04:56.269923  573699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:04:56.358410  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:04:56.374184  573699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:04:56.391509  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:04:56.403885  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:04:56.418704  573699 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:04:56.418843  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:04:56.432502  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.446280  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:04:56.458732  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:04:56.471691  573699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:04:56.482737  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:04:56.494667  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:04:56.507284  573699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:04:56.520174  573699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:04:56.530768  573699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:04:56.541170  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:56.646657  573699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:04:56.781992  573699 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:04:56.782112  573699 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:04:56.788198  573699 start.go:564] Will wait 60s for crictl version
	I1219 03:04:56.788285  573699 ssh_runner.go:195] Run: which crictl
	I1219 03:04:56.793113  573699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:04:56.836402  573699 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:04:56.836474  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.864133  573699 ssh_runner.go:195] Run: containerd --version
	I1219 03:04:56.898122  573699 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:04:53.197683  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23269288s)
	I1219 03:04:53.197756  569947 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.199861038s)
	I1219 03:04:53.197848  569947 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:53.197862  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.120620602s)
	I1219 03:04:53.198058  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.09074876s)
	I1219 03:04:53.198096  569947 addons.go:500] Verifying addon metrics-server=true in "no-preload-208281"
	I1219 03:04:53.198179  569947 api_server.go:72] duration metric: took 2.540661776s to wait for apiserver process to appear ...
	I1219 03:04:53.198202  569947 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:04:53.198229  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.198445  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:53.205510  569947 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:04:53.205637  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.205671  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:53.698608  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:53.705658  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:04:53.705697  569947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:04:54.198361  569947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:04:54.202897  569947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:04:54.204079  569947 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:04:54.204114  569947 api_server.go:131] duration metric: took 1.005903946s to wait for apiserver health ...
	I1219 03:04:54.204127  569947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:04:54.208336  569947 system_pods.go:59] 9 kube-system pods found
	I1219 03:04:54.208377  569947 system_pods.go:61] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.208389  569947 system_pods.go:61] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.208403  569947 system_pods.go:61] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.208424  569947 system_pods.go:61] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.208437  569947 system_pods.go:61] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.208444  569947 system_pods.go:61] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.208460  569947 system_pods.go:61] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.208472  569947 system_pods.go:61] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.208477  569947 system_pods.go:61] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.208488  569947 system_pods.go:74] duration metric: took 4.352835ms to wait for pod list to return data ...
	I1219 03:04:54.208503  569947 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:04:54.211346  569947 default_sa.go:45] found service account: "default"
	I1219 03:04:54.211373  569947 default_sa.go:55] duration metric: took 2.86243ms for default service account to be created ...
	I1219 03:04:54.211385  569947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:04:54.214301  569947 system_pods.go:86] 9 kube-system pods found
	I1219 03:04:54.214337  569947 system_pods.go:89] "coredns-7d764666f9-hm5hz" [59441d91-a2b7-4d87-86d1-5ccaaec4e398] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:04:54.214347  569947 system_pods.go:89] "etcd-no-preload-208281" [edfe3a0f-95b1-49ee-8843-456255c2c573] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:04:54.214360  569947 system_pods.go:89] "kindnet-zbmbl" [e7d80d3e-7bf1-4e49-b7f9-c0911bbae20d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:04:54.214369  569947 system_pods.go:89] "kube-apiserver-no-preload-208281" [cee547f9-b6ae-4654-b92b-5cd3c5caae01] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:04:54.214377  569947 system_pods.go:89] "kube-controller-manager-no-preload-208281" [ed375fa0-c03b-42d4-9887-cbe64ed19aeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:04:54.214386  569947 system_pods.go:89] "kube-proxy-xst8w" [24d16e46-3e1f-4d38-a486-8f15642946c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:04:54.214402  569947 system_pods.go:89] "kube-scheduler-no-preload-208281" [65c63f44-2615-47ca-9323-d80a812af086] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:04:54.214411  569947 system_pods.go:89] "metrics-server-5d785b57d4-zgcxz" [743fe6aa-308c-4f80-b7f5-c753be058b69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:04:54.214421  569947 system_pods.go:89] "storage-provisioner" [5bab6e7d-150b-4c8e-ab0a-933ec046c863] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:04:54.214431  569947 system_pods.go:126] duration metric: took 3.039478ms to wait for k8s-apps to be running ...
	I1219 03:04:54.214443  569947 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:04:54.214504  569947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:04:54.371132  569947 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.165499888s)
	I1219 03:04:54.371186  569947 system_svc.go:56] duration metric: took 156.734958ms WaitForService to wait for kubelet
	I1219 03:04:54.371215  569947 kubeadm.go:587] duration metric: took 3.713723941s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:54.371244  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:04:54.371246  569947 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:04:54.374625  569947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:04:54.374660  569947 node_conditions.go:123] node cpu capacity is 8
	I1219 03:04:54.374679  569947 node_conditions.go:105] duration metric: took 3.423654ms to run NodePressure ...
	I1219 03:04:54.374695  569947 start.go:242] waiting for startup goroutines ...
	I1219 03:04:57.635651  569947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.264367144s)
	I1219 03:04:57.635887  569947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:57.949184  569947 addons.go:500] Verifying addon dashboard=true in "no-preload-208281"
	I1219 03:04:57.949557  569947 cli_runner.go:164] Run: docker container inspect no-preload-208281 --format={{.State.Status}}
	I1219 03:04:57.976511  569947 out.go:179] * Verifying dashboard addon...
	I1219 03:04:56.899304  573699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:56.919626  573699 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:04:56.924517  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:56.937946  573699 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:04:56.938108  573699 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:04:56.938182  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.968240  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.968267  573699 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:04:56.968327  573699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:04:56.997359  573699 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:04:56.997383  573699 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:04:56.997392  573699 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 containerd true true} ...
	I1219 03:04:56.997515  573699 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-103644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:04:56.997591  573699 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:04:57.033726  573699 cni.go:84] Creating CNI manager for ""
	I1219 03:04:57.033760  573699 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:04:57.033788  573699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:04:57.033818  573699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103644 NodeName:default-k8s-diff-port-103644 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:04:57.034013  573699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-103644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:04:57.034110  573699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:04:57.054291  573699 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:04:57.054366  573699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:04:57.069183  573699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1219 03:04:57.092986  573699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:04:57.114537  573699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I1219 03:04:57.135768  573699 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:04:57.141830  573699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:04:57.157200  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:57.285296  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:57.321401  573699 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644 for IP: 192.168.94.2
	I1219 03:04:57.321425  573699 certs.go:195] generating shared ca certs ...
	I1219 03:04:57.321445  573699 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:57.321651  573699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:04:57.321728  573699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:04:57.321741  573699 certs.go:257] generating profile certs ...
	I1219 03:04:57.321895  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.key
	I1219 03:04:57.321969  573699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key.eac4724a
	I1219 03:04:57.322032  573699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key
	I1219 03:04:57.322452  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:04:57.322563  573699 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:04:57.322947  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:04:57.323038  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:04:57.323130  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:04:57.323212  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:04:57.323310  573699 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:04:57.324261  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:04:57.367430  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:04:57.395772  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:04:57.447975  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:04:57.485724  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:04:57.550160  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:04:57.586359  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:04:57.650368  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:04:57.705528  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:04:57.753827  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:04:57.796129  573699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:04:57.846633  573699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:04:57.874041  573699 ssh_runner.go:195] Run: openssl version
	I1219 03:04:57.883186  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.893276  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:04:57.903322  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908713  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.908788  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:04:57.959424  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:04:57.975955  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:04:57.987406  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:04:57.999924  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007017  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.007094  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:04:58.066450  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:04:58.084889  573699 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.104839  573699 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:04:58.121039  573699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128831  573699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.128908  573699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:04:58.238719  573699 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:04:58.257473  573699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:04:58.269077  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:04:58.373050  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:04:58.472122  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:04:58.523474  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:04:58.567812  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:04:58.624150  573699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:04:58.663023  573699 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-103644 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:58.663147  573699 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:04:58.663225  573699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:04:58.698055  573699 cri.go:92] found id: "19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c"
	I1219 03:04:58.698124  573699 cri.go:92] found id: "c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7"
	I1219 03:04:58.698150  573699 cri.go:92] found id: "a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1"
	I1219 03:04:58.698161  573699 cri.go:92] found id: "fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652"
	I1219 03:04:58.698166  573699 cri.go:92] found id: "36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d"
	I1219 03:04:58.698176  573699 cri.go:92] found id: "ed906de27de9c3783be2432f68b3e79b562b368da4fe5ddde333748fe58c2534"
	I1219 03:04:58.698180  573699 cri.go:92] found id: "72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d"
	I1219 03:04:58.698185  573699 cri.go:92] found id: "872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358"
	I1219 03:04:58.698189  573699 cri.go:92] found id: "dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525"
	I1219 03:04:58.698201  573699 cri.go:92] found id: "ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503"
	I1219 03:04:58.698208  573699 cri.go:92] found id: "069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd"
	I1219 03:04:58.698212  573699 cri.go:92] found id: "49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233"
	I1219 03:04:58.698216  573699 cri.go:92] found id: ""
	I1219 03:04:58.698271  573699 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:04:58.725948  573699 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537/rootfs","created":"2025-12-19T03:04:58.065318041Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-103644_50f4d1ce4fca33a4531f882f5fb97a4e","io.kubernetes.cri.sa
ndbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","pid":981,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c/rootfs","created":"2025-12-19T03:04:58.375811399Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.3","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-name":"kube-controller-manager-
default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be/rootfs","created":"2025-12-19T03:04:58.067793692Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube
-system_kube-controller-manager-default-k8s-diff-port-103644_ac53bb8a0832eefbaa4a648be6aad901","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ac53bb8a0832eefbaa4a648be6aad901"},"owner":"root"},{"ociVersion":"1.2.1","id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","pid":834,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f/rootfs","created":"2025-12-19T03:04:58.050783422Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernet
es.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-103644_996cf4b38188d4b0d664648ad2102013","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","pid":796,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc/rootfs","created":"2025-12-19T03:04:58.031779484Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","
io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-diff-port-103644_4275d7c883d3f735b8de47264bc63415","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"},{"ociVersion":"1.2.1","id":"a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","pid":951,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8858dc4fe6cf1222bb4214
99d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1/rootfs","created":"2025-12-19T03:04:58.294875595Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.3","io.kubernetes.cri.sandbox-id":"6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"996cf4b38188d4b0d664648ad2102013"},"owner":"root"},{"ociVersion":"1.2.1","id":"c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","pid":969,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7/rootfs","created":"2025-12-19T03:04:58.293243949Z","
annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.3","io.kubernetes.cri.sandbox-id":"0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"50f4d1ce4fca33a4531f882f5fb97a4e"},"owner":"root"},{"ociVersion":"1.2.1","id":"fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","pid":915,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652/rootfs","created":"2025-12-19T03:04:58.225549561Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"co
ntainer","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.5-0","io.kubernetes.cri.sandbox-id":"84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-103644","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4275d7c883d3f735b8de47264bc63415"},"owner":"root"}]
	I1219 03:04:58.726160  573699 cri.go:129] list returned 8 containers
	I1219 03:04:58.726176  573699 cri.go:132] container: {ID:0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 Status:running}
	I1219 03:04:58.726215  573699 cri.go:134] skipping 0074e46caf6e41a18e3d0f5443787c1bc06e1d75e37fd459fd30404491fd7537 - not in ps
	I1219 03:04:58.726225  573699 cri.go:132] container: {ID:19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c Status:running}
	I1219 03:04:58.726238  573699 cri.go:138] skipping {19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c running}: state = "running", want "paused"
	I1219 03:04:58.726253  573699 cri.go:132] container: {ID:6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be Status:running}
	I1219 03:04:58.726263  573699 cri.go:134] skipping 6f1ccb334eeac83f482c9566bcf07d1b7f0691e38f246bd1e2b18a839382f2be - not in ps
	I1219 03:04:58.726272  573699 cri.go:132] container: {ID:6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f Status:running}
	I1219 03:04:58.726282  573699 cri.go:134] skipping 6fa84325f8005a8cda67fb08f0d9185af05913fd2ed0468866c0616c98bcca9f - not in ps
	I1219 03:04:58.726287  573699 cri.go:132] container: {ID:84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc Status:running}
	I1219 03:04:58.726296  573699 cri.go:134] skipping 84008f4f31b82c848a4941fccaee8d78c5d94eb07bb5b35b53ea6f10f5e460bc - not in ps
	I1219 03:04:58.726300  573699 cri.go:132] container: {ID:a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 Status:running}
	I1219 03:04:58.726310  573699 cri.go:138] skipping {a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1 running}: state = "running", want "paused"
	I1219 03:04:58.726317  573699 cri.go:132] container: {ID:c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 Status:running}
	I1219 03:04:58.726327  573699 cri.go:138] skipping {c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7 running}: state = "running", want "paused"
	I1219 03:04:58.726334  573699 cri.go:132] container: {ID:fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 Status:running}
	I1219 03:04:58.726341  573699 cri.go:138] skipping {fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652 running}: state = "running", want "paused"
	I1219 03:04:58.726406  573699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:04:58.736002  573699 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:04:58.736024  573699 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:04:58.736083  573699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:04:58.745325  573699 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:04:58.746851  573699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103644" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.747840  573699 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103644" cluster setting kubeconfig missing "default-k8s-diff-port-103644" context setting]
	I1219 03:04:58.749236  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.751783  573699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:04:58.761185  573699 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:04:58.761233  573699 kubeadm.go:602] duration metric: took 25.202742ms to restartPrimaryControlPlane
	I1219 03:04:58.761245  573699 kubeadm.go:403] duration metric: took 98.23938ms to StartCluster
	I1219 03:04:58.761266  573699 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.761344  573699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:04:58.763956  573699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:58.764278  573699 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:04:58.764352  573699 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:04:58.764458  573699 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764482  573699 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.764491  573699 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:04:58.764498  573699 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764518  573699 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:04:58.764533  573699 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764530  573699 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-103644"
	I1219 03:04:58.764551  573699 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764557  573699 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:04:58.764521  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764523  573699 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103644"
	W1219 03:04:58.764565  573699 addons.go:248] addon metrics-server should already be in state true
	I1219 03:04:58.764660  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.764898  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765067  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	W1219 03:04:58.764563  573699 addons.go:248] addon dashboard should already be in state true
	I1219 03:04:58.765224  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.765244  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.765778  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.766439  573699 out.go:179] * Verifying Kubernetes components...
	I1219 03:04:58.769848  573699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:04:58.795158  573699 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103644"
	W1219 03:04:58.795295  573699 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:04:58.795354  573699 host.go:66] Checking if "default-k8s-diff-port-103644" exists ...
	I1219 03:04:58.796260  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:04:58.798810  573699 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:04:58.798816  573699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:04:57.865290  566718 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.865322  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.373051  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.867408  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.364332  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.245497  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:55.746387  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.245217  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:56.749455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.246279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:57.748208  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.247627  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.745395  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.247400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.747210  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.799225  573699 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:04:58.799247  573699 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:04:58.799304  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.799993  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:04:58.800017  573699 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:04:58.800075  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.800356  573699 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:58.800371  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:04:58.800429  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.837919  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.838753  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.846681  573699 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:58.846725  573699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:04:58.846799  573699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103644
	I1219 03:04:58.869014  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.891596  573699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/default-k8s-diff-port-103644/id_rsa Username:docker}
	I1219 03:04:58.990117  573699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:04:59.008626  573699 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:04:59.009409  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:04:59.016187  573699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:04:59.016907  573699 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:04:59.044939  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:04:59.044973  573699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:04:59.048120  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:04:59.087063  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:04:59.087153  573699 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:04:59.114132  573699 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:04:59.114163  573699 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:04:59.144085  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:05:00.372562  573699 node_ready.go:49] node "default-k8s-diff-port-103644" is "Ready"
	I1219 03:05:00.372622  573699 node_ready.go:38] duration metric: took 1.356373278s for node "default-k8s-diff-port-103644" to be "Ready" ...
	I1219 03:05:00.372644  573699 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:00.372706  573699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:04:57.979521  569947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:04:57.983495  569947 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:04:57.983523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.489816  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:58.984080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.484148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.983915  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.484939  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.985080  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.486418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.986557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.484684  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:04:59.866115  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.365239  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.866184  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.366415  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.863549  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.364375  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.863998  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.363890  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.863749  566718 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.382768  566718 kapi.go:107] duration metric: took 12.523639555s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:04.433515  566718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-002036 addons enable metrics-server
	
	I1219 03:05:04.435631  566718 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:05:04.437408  566718 addons.go:546] duration metric: took 22.668379604s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:05:04.437463  566718 start.go:247] waiting for cluster config update ...
	I1219 03:05:04.437482  566718 start.go:256] writing updated cluster config ...
	I1219 03:05:04.437853  566718 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:04.443668  566718 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:04.450779  566718 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:00.248093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:00.749216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.247778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.747890  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.245449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:02.746684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.247359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.746557  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.245966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.746278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:01.448117  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.43867528s)
	I1219 03:05:01.448182  573699 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.431240621s)
	I1219 03:05:01.448196  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.399991052s)
	I1219 03:05:01.448260  573699 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:01.448385  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.304270108s)
	I1219 03:05:01.448406  573699 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-103644"
	I1219 03:05:01.448485  573699 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.075756393s)
	I1219 03:05:01.448520  573699 api_server.go:72] duration metric: took 2.684209271s to wait for apiserver process to appear ...
	I1219 03:05:01.448536  573699 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:01.448558  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.448716  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:01.458744  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:01.458783  573699 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:01.950069  573699 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:01.959300  573699 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:01.960703  573699 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:01.960739  573699 api_server.go:131] duration metric: took 512.19419ms to wait for apiserver health ...
	I1219 03:05:01.960751  573699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:01.965477  573699 system_pods.go:59] 9 kube-system pods found
	I1219 03:05:01.965544  573699 system_pods.go:61] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.965560  573699 system_pods.go:61] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.965595  573699 system_pods.go:61] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.965611  573699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.965623  573699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.965631  573699 system_pods.go:61] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.965640  573699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.965653  573699 system_pods.go:61] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.965660  573699 system_pods.go:61] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.965670  573699 system_pods.go:74] duration metric: took 4.91154ms to wait for pod list to return data ...
	I1219 03:05:01.965682  573699 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:01.969223  573699 default_sa.go:45] found service account: "default"
	I1219 03:05:01.969255  573699 default_sa.go:55] duration metric: took 3.563468ms for default service account to be created ...
	I1219 03:05:01.969269  573699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:01.973647  573699 system_pods.go:86] 9 kube-system pods found
	I1219 03:05:01.973775  573699 system_pods.go:89] "coredns-66bc5c9577-86vsf" [d2b924f3-ac71-431b-a3e6-f85f1e0b94e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:01.973790  573699 system_pods.go:89] "etcd-default-k8s-diff-port-103644" [ececfad7-09c9-4851-9fda-c468648a6e3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:01.973797  573699 system_pods.go:89] "kindnet-vgs5z" [3a78062f-cab2-4e56-bc36-33ecf9505255] Running
	I1219 03:05:01.973804  573699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103644" [c5859d2c-4337-4b88-a46f-695c3ac4f9c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:01.973810  573699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103644" [57334df1-410d-4993-936c-c6cf1604c166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:01.973828  573699 system_pods.go:89] "kube-proxy-lgw6f" [3b4461b1-0b30-427d-9e31-107cea049612] Running
	I1219 03:05:01.973834  573699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103644" [e44d65aa-7d39-4020-b9d6-4473f92a8f90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:01.973840  573699 system_pods.go:89] "metrics-server-746fcd58dc-tctv8" [37ff7895-b382-407b-9032-56a428173579] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:05:01.973843  573699 system_pods.go:89] "storage-provisioner" [f12460c5-0196-4171-a44f-31b13af14f9f] Running
	I1219 03:05:01.973852  573699 system_pods.go:126] duration metric: took 4.574679ms to wait for k8s-apps to be running ...
	I1219 03:05:01.973859  573699 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:01.973912  573699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:02.653061  573699 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.204735295s)
	I1219 03:05:02.653137  573699 system_svc.go:56] duration metric: took 679.266214ms WaitForService to wait for kubelet
	I1219 03:05:02.653168  573699 kubeadm.go:587] duration metric: took 3.888855367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:02.653197  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:02.653199  573699 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:02.656332  573699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:02.656365  573699 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:02.656382  573699 node_conditions.go:105] duration metric: took 3.090983ms to run NodePressure ...
	I1219 03:05:02.656398  573699 start.go:242] waiting for startup goroutines ...
	I1219 03:05:05.900902  573699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.247656336s)
	I1219 03:05:05.901008  573699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:06.370072  573699 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-103644"
	I1219 03:05:06.370443  573699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103644 --format={{.State.Status}}
	I1219 03:05:06.413077  573699 out.go:179] * Verifying dashboard addon...
	I1219 03:05:02.984573  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.483377  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:03.983965  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.483784  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:04.983862  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.484412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.985034  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.484458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.983536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:06.463527  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:08.958366  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:05.245656  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:05.747655  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.245722  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.748049  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.245806  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.806712  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.317551  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.746359  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.246666  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.745789  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.432631  573699 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:06.442236  573699 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:06.442267  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:06.938273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.436226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.935844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.437222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.937396  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.436432  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.937420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.436795  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:07.982775  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.484705  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:08.983943  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.483954  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:09.984850  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.484036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.985868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.484253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.984283  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.483325  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:11.457419  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:13.957361  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:10.247114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.246179  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.747053  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.245687  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.245905  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.745641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.245181  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.746110  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:10.937352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.436009  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:11.937001  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.437140  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.937021  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.936272  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.937045  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.436754  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:12.983838  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.483669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:13.983389  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.483140  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:14.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.483333  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.983426  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.483195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.982683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.483883  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:16.457830  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	W1219 03:05:18.956955  566718 pod_ready.go:104] pod "coredns-5dd5756b68-l88tx" is not "Ready", error: <nil>
	I1219 03:05:15.245238  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.746028  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.245738  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.746152  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.245944  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.745478  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.244810  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.745484  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.245267  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.747027  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:15.935367  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.437144  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.936697  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.436257  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.938151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.436806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.936368  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.436056  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.936823  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.436574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.956728  566718 pod_ready.go:94] pod "coredns-5dd5756b68-l88tx" is "Ready"
	I1219 03:05:20.956755  566718 pod_ready.go:86] duration metric: took 16.505943894s for pod "coredns-5dd5756b68-l88tx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.959784  566718 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.964097  566718 pod_ready.go:94] pod "etcd-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.964121  566718 pod_ready.go:86] duration metric: took 4.312579ms for pod "etcd-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.967209  566718 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.971311  566718 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-002036" is "Ready"
	I1219 03:05:20.971340  566718 pod_ready.go:86] duration metric: took 4.107095ms for pod "kube-apiserver-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:20.974403  566718 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.155192  566718 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-002036" is "Ready"
	I1219 03:05:21.155230  566718 pod_ready.go:86] duration metric: took 180.802142ms for pod "kube-controller-manager-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.356374  566718 pod_ready.go:83] waiting for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.755068  566718 pod_ready.go:94] pod "kube-proxy-666m9" is "Ready"
	I1219 03:05:21.755101  566718 pod_ready.go:86] duration metric: took 398.695005ms for pod "kube-proxy-666m9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:21.955309  566718 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355240  566718 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-002036" is "Ready"
	I1219 03:05:22.355268  566718 pod_ready.go:86] duration metric: took 399.930732ms for pod "kube-scheduler-old-k8s-version-002036" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.355280  566718 pod_ready.go:40] duration metric: took 17.911572961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.403101  566718 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1219 03:05:22.405195  566718 out.go:203] 
	W1219 03:05:22.406549  566718 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1219 03:05:22.407721  566718 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1219 03:05:22.409075  566718 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-002036" cluster and "default" namespace by default
	I1219 03:05:17.983934  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.483978  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.983469  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.483031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.483856  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.983202  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.983682  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.483477  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.246405  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.745732  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.246211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.745513  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.246072  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.746161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.245454  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.745802  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.246011  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.745886  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.936632  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:21.937387  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.438356  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.936036  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.436638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.436285  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.936343  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.436214  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:22.983526  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.483608  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:23.984007  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.483768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.983330  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.483626  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.983245  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.983688  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.483645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.245298  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.745913  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.246357  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.746837  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.245727  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.745064  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.245698  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.745390  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.245749  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.746545  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.436179  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.936807  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.436692  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.936427  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.436416  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.936100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.436165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.936887  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.437744  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.983729  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.484151  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.982796  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.483575  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:29.983807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.484546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.482703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.483041  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.245841  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.746191  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.246984  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.746555  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.245535  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.245430  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.746001  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.245532  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.745216  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:30.936806  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.437044  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:31.937073  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.436137  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.937365  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.436414  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.936352  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.435813  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.936438  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.435923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:32.984055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.483382  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:33.984500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:34.483032  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.245754  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:35.936381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll lines from PIDs 569947, 568301, and 573699, repeated at ~500ms intervals with the pod state remaining Pending throughout, elided ...]
	I1219 03:06:17.483076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.246075  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.746406  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:15.936573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.436355  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:16.935609  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.436112  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.936695  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.436177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.936615  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.436180  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.936693  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.436473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:17.984187  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.484214  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:18.983011  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.483899  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:19.984512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.482716  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.983406  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.985122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.483290  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.246645  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.746554  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.245477  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.746237  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.246559  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.746156  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.245694  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.744920  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.246400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.745171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:20.936301  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.435818  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:21.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.436319  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.436967  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.936394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.436573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.936226  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.436480  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:22.983215  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.483166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:23.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:24.983180  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.483488  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.983441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.482752  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.983544  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.482808  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.245475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.746511  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.245967  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.746303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.245996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.745286  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.246778  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.745279  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.245781  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.745086  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:25.936101  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.437131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:26.936600  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.436041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.937177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.437421  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.935735  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.436019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.437190  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:27.984252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.483837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:28.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:29.983514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.482704  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.983246  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.482944  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.984320  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.483797  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.246209  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.745803  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.245503  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.246768  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.745863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.245185  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.745549  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.245747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.746416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:30.935759  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.435954  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:31.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.436706  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.936420  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.436605  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.937043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.437152  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.436211  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:32.983286  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.483036  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:33.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.485767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:34.983683  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.484067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.483037  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.483748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.245980  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.745904  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.747073  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.246061  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.746010  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.745926  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.245654  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.745463  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:35.935859  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.437530  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:36.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.436942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.936253  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.437229  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.936794  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.436501  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.936447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.436258  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:37.983789  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.483692  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:38.983255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.483001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:39.982877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.483721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.983399  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.482771  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.983968  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.483847  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.246603  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.745229  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.245985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.746233  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.246354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.746354  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.245729  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.745993  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.745977  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:40.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.436604  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:41.936997  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.436608  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.936332  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.436076  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.937096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.437052  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.936644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.436313  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:42.983561  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.483231  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:43.983328  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.483130  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:44.983671  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.484255  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.984498  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.483267  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.982818  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.483172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.246007  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.246281  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.746636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.245338  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.746505  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.246541  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.745349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.246003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.746025  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:45.935627  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.437425  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:46.937256  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.436775  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.936905  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.436271  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.436681  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.937261  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.436230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:47.983908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:48.983761  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.483697  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:49.983928  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.484339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.246203  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:50.936569  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.436831  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:31.936714  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.436596  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.936067  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.436898  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.936839  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.436572  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.936153  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.436037  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:32.983547  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.483091  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:33.983273  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.483523  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:34.982933  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.483553  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.983907  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.484242  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.983005  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.483666  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.245928  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.745885  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.245358  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.747236  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.245813  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.745544  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.245252  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.746445  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.245380  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.746275  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:35.936921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.436862  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:36.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.437100  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.936746  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.436661  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.936108  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.436741  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.937134  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.437138  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:37.984072  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.483408  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:38.982980  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.483839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:39.983815  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.484237  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.982748  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.483227  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.983491  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.483502  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.246302  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.746840  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.245743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.745752  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.245764  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.745565  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.245413  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.745818  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.245622  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.746548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:40.936117  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.436793  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:41.937328  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.436385  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.937184  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.437161  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.936755  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.436384  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.937437  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.436119  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:42.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.483872  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:43.983964  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.484354  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:44.983693  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.483534  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.983273  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.483358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.983949  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.483681  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.245051  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.745840  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.245710  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.747059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.245761  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.746224  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.245979  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.746397  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.246462  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.745161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:45.936393  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.435574  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:46.936269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.436736  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.935923  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.436191  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.937125  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.436724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.936060  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.436464  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:47.983875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:48.983702  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.483743  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:49.983649  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.484353  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.484106  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.983289  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.483003  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.245241  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.746800  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.245636  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.745903  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.245501  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.746786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.245828  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.746731  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.245243  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.746109  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:50.936423  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.436185  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:51.937335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.435811  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.936607  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.437193  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.937024  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.436703  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.936452  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.436033  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:52.982921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:53.984334  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.483331  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:54.983338  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.983619  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.483807  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.983721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.483219  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.245525  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.745310  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.246066  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.748380  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.246087  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.746200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.246172  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.746116  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.246000  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.745364  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:55.938959  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.436375  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:56.936439  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.435973  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.936388  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.435955  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.937067  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.436689  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.936873  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.436068  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:57.983216  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:58.982893  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.483703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:07:59.983507  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.483848  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.983741  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.483139  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.982982  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.483474  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.245849  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.745943  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.245514  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.745976  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.245776  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.745774  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.246195  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:00.937291  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:01.937126  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.437088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.936378  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.435816  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.936486  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.436861  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.936773  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.437070  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:02.983196  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.482648  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:03.984096  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.483607  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:04.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.483828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.983686  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.484218  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.984889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.484117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.245432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.746171  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.246148  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.746794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.245134  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.745858  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.245332  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.245744  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.745345  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:05.935722  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.437147  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:06.937110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.436107  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.936683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.437338  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.937224  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.435895  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.936364  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.436440  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:07.984241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.483451  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:08.983165  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.483042  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:09.982951  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.484340  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.983004  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.483822  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.983489  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.483877  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.246451  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.746155  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.246021  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.745725  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.245017  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.747153  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.246746  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.745692  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.245869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.745814  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:10.937288  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.436218  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:11.937058  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.436201  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.936942  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.436514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.937227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.435900  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.937246  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.437248  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:12.983685  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.483319  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:13.983759  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.483672  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:14.983171  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.482646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.983174  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.483545  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.983864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.484102  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.245723  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.745561  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.247817  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.747200  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.246180  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.746059  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.245772  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.746003  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.245769  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.745631  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:15.935465  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.436710  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:16.936296  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.436222  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.937015  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.937083  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.436796  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.936995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.437457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:17.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.483942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:18.983638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.483595  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:19.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.484503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.983773  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.483765  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.983647  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.483706  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.246047  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.746223  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.245764  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.746404  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.246013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.745963  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.245843  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.745567  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.246427  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.746391  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:20.937102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.435564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:21.936469  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.436649  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.436778  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.936059  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.437189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.937170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.436704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:22.982868  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.484268  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:23.983374  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.483212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:24.983344  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.483884  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.983398  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.484023  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.984234  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.483988  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.246093  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.745866  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.245647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.747173  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.245862  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.745538  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.245299  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.746103  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.746350  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:25.937269  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.435729  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:26.936734  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.436476  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.936918  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.436636  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.936510  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.436255  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.936175  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.436005  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:27.983312  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.484050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:28.983339  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.482531  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:29.982929  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.483747  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.983500  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.482861  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.484296  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.245816  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.745632  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.245311  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.746323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.246307  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.746634  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.245352  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.746294  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.246399  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.746747  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:30.937031  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.436676  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:31.936840  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.436650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.936793  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.436310  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.936030  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.437178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.937165  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.436157  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:32.983447  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.484087  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:33.983935  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.484195  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:34.982889  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.483424  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.982827  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.483920  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.984144  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.484302  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.245293  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.746004  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.245793  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.746989  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.245794  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.746839  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.245459  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.746688  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.245861  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.745472  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:35.937370  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.435903  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:36.936747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.436447  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.937054  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.437019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.937481  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.936333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.436131  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:37.983136  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.484093  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:38.983753  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.483392  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:39.983335  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.483238  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.982643  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.483017  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.983148  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.484213  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.245696  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.745797  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.245831  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.745795  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.245558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.745449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.246006  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:43.746105  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.246305  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:44.746990  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:40.936241  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.436869  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:41.936851  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.436552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (573699 repeats the same poll line every ~500ms through I1219 03:09:25.435443) ...
	I1219 03:09:25.435443  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:42.982609  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (569947 repeats the same poll line every ~500ms through I1219 03:09:26.984025) ...
	I1219 03:09:26.984025  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:08:45.245057  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	... (568301 repeats the same poll line every ~500ms through I1219 03:09:24.745496) ...
	I1219 03:09:24.745496  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.483709  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.246278  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.746235  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.246283  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.746411  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.246592  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.745927  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.245680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.745389  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.246386  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.745671  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:25.936495  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.436178  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:26.937066  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.435968  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.936852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.436035  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.936880  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.436057  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.936860  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.436717  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:27.983478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.483606  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:28.984122  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.490050  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:29.982603  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.483055  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.984015  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.483501  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.982832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.483241  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.245020  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.745924  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.245930  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.745911  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.745201  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.245713  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.745983  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.245893  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.745539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:30.935985  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.436747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:31.936740  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.436110  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.937088  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.436764  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.936724  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.436386  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:32.983173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.483859  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:33.983142  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:34.984166  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.483826  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.983185  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.484158  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.984358  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.482832  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.246393  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.745896  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.245850  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.746287  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.246273  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.747864  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.245616  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.745334  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.246449  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.744981  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:35.936971  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.436804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:36.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.436958  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.936877  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.436656  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.936136  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.435670  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:37.983744  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.482921  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:38.983872  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.483540  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:39.984141  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.483479  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.984063  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.983552  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.483481  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.245548  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.746558  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.246611  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.745533  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.245131  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.746326  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.246887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.745358  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.246189  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.745991  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:40.937573  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.435677  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:41.936406  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.435935  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.936714  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.436043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.936827  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.435885  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.936556  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.436774  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:42.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:43.983361  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.482912  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:44.983873  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.482660  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.982839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.483503  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.983067  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.483638  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.245846  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.746643  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.245931  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.746121  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.246355  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.745777  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.245928  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.745620  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.246014  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.745623  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:45.936490  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.437169  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:46.936638  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.435797  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.937106  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.436462  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.935673  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.435921  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.936345  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.435704  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:47.983064  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.483495  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:48.983383  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.482815  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:49.983133  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.483521  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.983458  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.483539  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.982669  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.482740  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.245254  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.746529  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.246403  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.746576  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.245194  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.745901  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.245791  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.745384  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.246056  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.745809  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:50.936502  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.436533  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:51.936298  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.436872  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.936965  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.436624  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.936645  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.435868  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.936019  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.436761  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:52.984260  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.483436  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:53.983307  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:54.983837  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.482909  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.983703  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.483097  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.984370  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.483476  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.245416  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.745596  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.246315  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.746972  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.246432  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.746169  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.245899  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.745701  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.246684  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.746013  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:55.936103  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.436731  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:56.936130  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.436934  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.936650  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.435890  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.936552  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.436324  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.936567  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.436613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:57.982857  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.483173  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:58.984076  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.483622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:09:59.983152  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.483700  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.483248  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.983111  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.483698  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.245724  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.746426  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.245360  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.746680  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.245174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.746009  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.246343  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.746019  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.245779  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.745882  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:00.935947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.437327  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:01.937129  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.436468  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.936473  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.436333  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.936134  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.436385  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.937151  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.437232  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:02.983942  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.483661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:03.983172  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.483536  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:04.983253  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.483439  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.982645  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.483045  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.984031  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.483303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.245641  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.745823  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.245494  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.746765  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.245879  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.745869  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.245211  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.746263  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.246504  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.744996  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:05.936844  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.436478  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:06.935984  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.436742  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.935862  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.436143  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.936623  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.437102  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.936964  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.436154  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:07.983001  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.483616  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:08.983409  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.483478  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:09.982888  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.483505  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.983487  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.482828  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.982887  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.483514  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.245552  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.745120  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.246143  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.746163  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.245633  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.745368  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.246475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.745271  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.245933  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.745805  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:10.935671  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.436335  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:11.936196  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.436273  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.936625  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.436266  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.936782  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.936448  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.436442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:12.983418  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.483281  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:13.983117  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.483767  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:14.984021  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.483731  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.983275  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.483869  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.983375  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.482882  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.245668  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.746147  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.246640  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.746736  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.246420  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.745966  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.246253  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.745906  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.246303  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.745986  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:15.937381  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.436018  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:16.936466  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.436852  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.936227  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.437410  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.935713  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.436449  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.935644  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.435982  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:17.983311  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.483558  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:18.983528  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.483170  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:19.984155  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.483754  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.983412  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.483938  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.983465  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.483020  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.245720  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.745374  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.246756  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.745755  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.245418  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.746818  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.245897  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.745485  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.245161  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.746048  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:20.936177  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.437209  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:21.936770  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.436342  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.936061  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.436819  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.935988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.436564  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.935683  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.437297  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:22.983512  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.484563  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:23.983146  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.483790  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:24.983839  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.483026  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.983149  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.484482  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.983378  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.482721  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.246464  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.746065  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.246367  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.746647  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.245786  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.746272  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.245936  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.745748  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.245512  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.745830  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:25.936845  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.436141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:26.937290  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.437316  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.936601  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.435947  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.936694  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.436517  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.936790  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.436457  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:27.983105  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.483646  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:28.983252  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.483908  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:29.983724  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.483864  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.983787  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.483233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.983574  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.482995  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.245383  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.746128  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.246198  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.746431  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.246100  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.746119  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.246290  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.746036  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.245863  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.745323  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:30.936983  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.437037  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:31.936606  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.435930  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.936507  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.436189  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.936455  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.435839  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.935933  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.436995  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:32.984129  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.484212  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:33.984303  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.483306  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:34.983518  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.482738  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.982612  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.483504  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.983434  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.482971  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.246296  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.746349  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.247475  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.746626  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.246070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.746520  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.245142  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.745887  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.245695  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.745960  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:35.936318  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.435767  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:36.936550  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.436719  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.935917  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.435988  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.936787  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.436849  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.935749  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.436170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:37.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.483708  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:38.983679  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.483567  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:39.983364  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.483546  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.983622  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.484178  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.983532  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.482768  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.245506  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.745743  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.246985  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.746088  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.245673  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.746257  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.246242  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.745638  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.246113  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.745493  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:40.936613  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.436769  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:41.937022  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.436509  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.936170  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.436799  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.935953  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.436096  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.936230  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.436315  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:42.983678  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.482661  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:43.984210  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.482755  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:44.983557  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.483535  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.982947  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.483792  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.983100  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.484233  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.246539  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.746528  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.246174  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.746739  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.245697  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.745790  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.245070  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.745400  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.246339  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.745958  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:45.935928  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.436394  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:46.936522  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.436247  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.936524  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.436518  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.936708  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.437978  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.936041  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.436496  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:47.982762  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.483205  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:48.983515  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.483024  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:49.982989  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.483821  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.983511  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.482875  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.983288  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.483464  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.245892  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:50.745342  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.246251  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.746455  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.246114  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.745679  568301 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.243066  568301 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:10:53.243101  568301 kapi.go:107] duration metric: took 6m0.001125868s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:53.243227  568301 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:53.244995  568301 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1219 03:10:53.246175  568301 addons.go:546] duration metric: took 6m5.940868392s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1219 03:10:53.246216  568301 start.go:247] waiting for cluster config update ...
	I1219 03:10:53.246230  568301 start.go:256] writing updated cluster config ...
	I1219 03:10:53.246533  568301 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:53.251613  568301 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:53.256756  568301 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.261260  568301 pod_ready.go:94] pod "coredns-66bc5c9577-qmb9z" is "Ready"
	I1219 03:10:53.261285  568301 pod_ready.go:86] duration metric: took 4.502294ms for pod "coredns-66bc5c9577-qmb9z" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.263432  568301 pod_ready.go:83] waiting for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.267796  568301 pod_ready.go:94] pod "etcd-embed-certs-536489" is "Ready"
	I1219 03:10:53.267819  568301 pod_ready.go:86] duration metric: took 4.363443ms for pod "etcd-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.269959  568301 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.273954  568301 pod_ready.go:94] pod "kube-apiserver-embed-certs-536489" is "Ready"
	I1219 03:10:53.273978  568301 pod_ready.go:86] duration metric: took 3.994974ms for pod "kube-apiserver-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.276324  568301 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.655995  568301 pod_ready.go:94] pod "kube-controller-manager-embed-certs-536489" is "Ready"
	I1219 03:10:53.656024  568301 pod_ready.go:86] duration metric: took 379.67922ms for pod "kube-controller-manager-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:53.856274  568301 pod_ready.go:83] waiting for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.256232  568301 pod_ready.go:94] pod "kube-proxy-qhlhx" is "Ready"
	I1219 03:10:54.256260  568301 pod_ready.go:86] duration metric: took 399.957557ms for pod "kube-proxy-qhlhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.456456  568301 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856278  568301 pod_ready.go:94] pod "kube-scheduler-embed-certs-536489" is "Ready"
	I1219 03:10:54.856307  568301 pod_ready.go:86] duration metric: took 399.821962ms for pod "kube-scheduler-embed-certs-536489" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:54.856318  568301 pod_ready.go:40] duration metric: took 1.60467121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:54.908914  568301 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:10:54.910224  568301 out.go:179] * Done! kubectl is now configured to use "embed-certs-536489" cluster and "default" namespace by default
	I1219 03:10:50.936043  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.437199  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:51.937554  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.436648  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.935325  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.437090  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.936467  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.435747  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.937514  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.437259  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:52.983483  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.483110  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:53.984179  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.483441  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:54.983571  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.482976  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:55.983723  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.483799  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.983265  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.482795  569947 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.980094  569947 kapi.go:107] duration metric: took 6m0.000564024s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:10:57.980271  569947 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:10:57.982221  569947 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:10:57.983556  569947 addons.go:546] duration metric: took 6m7.330731268s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:10:57.983643  569947 start.go:247] waiting for cluster config update ...
	I1219 03:10:57.983661  569947 start.go:256] writing updated cluster config ...
	I1219 03:10:57.983965  569947 ssh_runner.go:195] Run: rm -f paused
	I1219 03:10:57.988502  569947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:57.993252  569947 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:57.997922  569947 pod_ready.go:94] pod "coredns-7d764666f9-hm5hz" is "Ready"
	I1219 03:10:57.997946  569947 pod_ready.go:86] duration metric: took 4.66305ms for pod "coredns-7d764666f9-hm5hz" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.000317  569947 pod_ready.go:83] waiting for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.004843  569947 pod_ready.go:94] pod "etcd-no-preload-208281" is "Ready"
	I1219 03:10:58.004871  569947 pod_ready.go:86] duration metric: took 4.527165ms for pod "etcd-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.006889  569947 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.010814  569947 pod_ready.go:94] pod "kube-apiserver-no-preload-208281" is "Ready"
	I1219 03:10:58.010843  569947 pod_ready.go:86] duration metric: took 3.912426ms for pod "kube-apiserver-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.012893  569947 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.394606  569947 pod_ready.go:94] pod "kube-controller-manager-no-preload-208281" is "Ready"
	I1219 03:10:58.394643  569947 pod_ready.go:86] duration metric: took 381.720753ms for pod "kube-controller-manager-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.594310  569947 pod_ready.go:83] waiting for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:58.994002  569947 pod_ready.go:94] pod "kube-proxy-xst8w" is "Ready"
	I1219 03:10:58.994037  569947 pod_ready.go:86] duration metric: took 399.698104ms for pod "kube-proxy-xst8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.194965  569947 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594191  569947 pod_ready.go:94] pod "kube-scheduler-no-preload-208281" is "Ready"
	I1219 03:10:59.594219  569947 pod_ready.go:86] duration metric: took 399.226469ms for pod "kube-scheduler-no-preload-208281" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:10:59.594230  569947 pod_ready.go:40] duration metric: took 1.605690954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:10:59.642421  569947 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:10:59.644674  569947 out.go:179] * Done! kubectl is now configured to use "no-preload-208281" cluster and "default" namespace by default
	I1219 03:10:55.937173  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.435825  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:56.936702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.436527  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:57.936442  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.436611  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:58.936591  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.436321  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:10:59.937837  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.436459  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:00.936639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.437141  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:01.936951  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.436292  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:02.936804  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.437702  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:03.936237  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.436721  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:04.936104  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.439639  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:05.936149  573699 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:11:06.433765  573699 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=kubernetes-dashboard-web" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1219 03:11:06.433806  573699 kapi.go:107] duration metric: took 6m0.001182154s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	W1219 03:11:06.433932  573699 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [waiting for app.kubernetes.io/name=kubernetes-dashboard-web pods: context deadline exceeded]
	I1219 03:11:06.435864  573699 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1219 03:11:06.437280  573699 addons.go:546] duration metric: took 6m7.672932083s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1219 03:11:06.437331  573699 start.go:247] waiting for cluster config update ...
	I1219 03:11:06.437348  573699 start.go:256] writing updated cluster config ...
	I1219 03:11:06.437666  573699 ssh_runner.go:195] Run: rm -f paused
	I1219 03:11:06.441973  573699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:06.446110  573699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.450837  573699 pod_ready.go:94] pod "coredns-66bc5c9577-86vsf" is "Ready"
	I1219 03:11:06.450868  573699 pod_ready.go:86] duration metric: took 4.729554ms for pod "coredns-66bc5c9577-86vsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.453222  573699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.457430  573699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.457451  573699 pod_ready.go:86] duration metric: took 4.204892ms for pod "etcd-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.459510  573699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.463733  573699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.463756  573699 pod_ready.go:86] duration metric: took 4.230488ms for pod "kube-apiserver-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.465771  573699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:06.846433  573699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:06.846461  573699 pod_ready.go:86] duration metric: took 380.664307ms for pod "kube-controller-manager-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.046474  573699 pod_ready.go:83] waiting for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.446485  573699 pod_ready.go:94] pod "kube-proxy-lgw6f" is "Ready"
	I1219 03:11:07.446515  573699 pod_ready.go:86] duration metric: took 400.010893ms for pod "kube-proxy-lgw6f" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:07.647551  573699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046807  573699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103644" is "Ready"
	I1219 03:11:08.046840  573699 pod_ready.go:86] duration metric: took 399.227778ms for pod "kube-scheduler-default-k8s-diff-port-103644" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:11:08.046853  573699 pod_ready.go:40] duration metric: took 1.604833632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:11:08.095708  573699 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:11:08.097778  573699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103644" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                              NAMESPACE
	219023786529f       6e38f40d628db       17 minutes ago      Running             storage-provisioner        2                   2d1b5a57c414a       storage-provisioner                              kube-system
	71825ae44f527       a0607af4fcd8a       18 minutes ago      Running             kubernetes-dashboard-api   0                   8cb713fbafe4a       kubernetes-dashboard-api-6fc4d946b9-gd6qk        kubernetes-dashboard
	a7b7fd7bf394e       59f642f485d26       18 minutes ago      Running             kubernetes-dashboard-web   0                   9074df2cb8cd2       kubernetes-dashboard-web-858bd7466-n2wrh         kubernetes-dashboard
	bba7b1d6bc96c       4921d7a6dffa9       18 minutes ago      Running             kindnet-cni                1                   a3585342df01c       kindnet-2hplz                                    kube-system
	b2c2e646ecfba       56cc512116c8f       18 minutes ago      Running             busybox                    1                   04096ed4a349b       busybox                                          default
	863ed2b101014       ead0a4a53df89       18 minutes ago      Running             coredns                    1                   6980262009b2b       coredns-5dd5756b68-l88tx                         kube-system
	27b2e16e5c09e       6e38f40d628db       18 minutes ago      Exited              storage-provisioner        1                   2d1b5a57c414a       storage-provisioner                              kube-system
	cdf87bf1433e7       ea1030da44aa1       18 minutes ago      Running             kube-proxy                 1                   ccf5bdf34f0f4       kube-proxy-666m9                                 kube-system
	ec41efb71d11f       f6f496300a2ae       18 minutes ago      Running             kube-scheduler             1                   3a42d74f17260       kube-scheduler-old-k8s-version-002036            kube-system
	dfe38bb0dfc86       bb5e0dde9054c       18 minutes ago      Running             kube-apiserver             1                   c0672cbc96865       kube-apiserver-old-k8s-version-002036            kube-system
	9b508ac5bcc2f       4be79c38a4bab       18 minutes ago      Running             kube-controller-manager    1                   0ea551da5c7a6       kube-controller-manager-old-k8s-version-002036   kube-system
	b9960e975d031       73deb9a3f7025       18 minutes ago      Running             etcd                       1                   3855b9c551a39       etcd-old-k8s-version-002036                      kube-system
	0910d0e25fb93       56cc512116c8f       19 minutes ago      Exited              busybox                    0                   cbcafd254e45b       busybox                                          default
	f5a6828923844       ead0a4a53df89       19 minutes ago      Exited              coredns                    0                   47f60961f0df9       coredns-5dd5756b68-l88tx                         kube-system
	39c0ef083f8a4       4921d7a6dffa9       19 minutes ago      Exited              kindnet-cni                0                   f1900c4db6e63       kindnet-2hplz                                    kube-system
	8763b9d407817       ea1030da44aa1       19 minutes ago      Exited              kube-proxy                 0                   12bf56a539ab2       kube-proxy-666m9                                 kube-system
	c0f0972814035       73deb9a3f7025       19 minutes ago      Exited              etcd                       0                   8bf3f5e0cb410       etcd-old-k8s-version-002036                      kube-system
	dba3065c9833e       bb5e0dde9054c       19 minutes ago      Exited              kube-apiserver             0                   4996e55587c6d       kube-apiserver-old-k8s-version-002036            kube-system
	4c30550e453e3       f6f496300a2ae       19 minutes ago      Exited              kube-scheduler             0                   fb7c53cd7882d       kube-scheduler-old-k8s-version-002036            kube-system
	42172d3a4d4cb       4be79c38a4bab       19 minutes ago      Exited              kube-controller-manager    0                   3611bff22d3f1       kube-controller-manager-old-k8s-version-002036   kube-system
	
	
	==> containerd <==
	Dec 19 03:23:13 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:13.948349266Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261654f50ea014ec6080d0b3394c8bcf.slice/cri-containerd-b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787.scope/hugetlb.1GB.events\""
	Dec 19 03:23:13 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:13.949071516Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51ba474_bbbd_4293_b5ef_480a0266f436.slice/cri-containerd-863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7.scope/hugetlb.2MB.events\""
	Dec 19 03:23:13 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:13.949160654Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51ba474_bbbd_4293_b5ef_480a0266f436.slice/cri-containerd-863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.963524598Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd007d2_8341_49d4_8b6d_8f799c794abf.slice/cri-containerd-a7b7fd7bf394e74ab791d76919b0a3eeaa8297034b785789903fd48bb69b157a.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.963662638Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd007d2_8341_49d4_8b6d_8f799c794abf.slice/cri-containerd-a7b7fd7bf394e74ab791d76919b0a3eeaa8297034b785789903fd48bb69b157a.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.964572486Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb4bc745d243699a94e4bd20f4c0b1d.slice/cri-containerd-9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.964729108Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb4bc745d243699a94e4bd20f4c0b1d.slice/cri-containerd-9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.965610629Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de5ff3676b9c1f3bccdf4ad3d42f1e.slice/cri-containerd-dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.965725604Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de5ff3676b9c1f3bccdf4ad3d42f1e.slice/cri-containerd-dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.966575551Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7074ad061d5bac3fab5e1d923113a3f.slice/cri-containerd-ec41efb71d11f10b0b94642489d3834fdc3d5928e6b0c2b8ffff7125bd7af0b5.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.966694109Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7074ad061d5bac3fab5e1d923113a3f.slice/cri-containerd-ec41efb71d11f10b0b94642489d3834fdc3d5928e6b0c2b8ffff7125bd7af0b5.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.967409216Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de888c7_fb93_4e9d_a535_31b7b29f921f.slice/cri-containerd-b2c2e646ecfbacbe13a3967da43bdd9ad22aa9e8731ab55e1c95e55fa24c45eb.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.967523881Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de888c7_fb93_4e9d_a535_31b7b29f921f.slice/cri-containerd-b2c2e646ecfbacbe13a3967da43bdd9ad22aa9e8731ab55e1c95e55fa24c45eb.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.968222521Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod39d24a02_b01d_42e2_91a9_afcbe4369262.slice/cri-containerd-bba7b1d6bc96c331ddb22fa76f1a84d6155438b0895d2c1747dc5fba25b38401.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.968321635Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod39d24a02_b01d_42e2_91a9_afcbe4369262.slice/cri-containerd-bba7b1d6bc96c331ddb22fa76f1a84d6155438b0895d2c1747dc5fba25b38401.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.969222196Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdeeabb6_d6bd_4c14_88ae_4a3b2cb95017.slice/cri-containerd-71825ae44f5277e1ab0659c4cf232265a66e3271a0ea4220f8f56d30ed22a8b1.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.969331028Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdeeabb6_d6bd_4c14_88ae_4a3b2cb95017.slice/cri-containerd-71825ae44f5277e1ab0659c4cf232265a66e3271a0ea4220f8f56d30ed22a8b1.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.970262182Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b372da_a545_4eb0_a787_a765babe3092.slice/cri-containerd-219023786529f0d2b2e8db1c37d04dd25946c1f17c1199c8669d4d942666f005.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.970390968Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b372da_a545_4eb0_a787_a765babe3092.slice/cri-containerd-219023786529f0d2b2e8db1c37d04dd25946c1f17c1199c8669d4d942666f005.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.971336490Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261654f50ea014ec6080d0b3394c8bcf.slice/cri-containerd-b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.971445270Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261654f50ea014ec6080d0b3394c8bcf.slice/cri-containerd-b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.972176147Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51ba474_bbbd_4293_b5ef_480a0266f436.slice/cri-containerd-863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.972268915Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51ba474_bbbd_4293_b5ef_480a0266f436.slice/cri-containerd-863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7.scope/hugetlb.1GB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.972919432Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b59ee1_673b_4dbe_bc2c_d2ff2e3a620c.slice/cri-containerd-cdf87bf1433e7c2e0dae2c3a75335eb849fc8e2aa686dccdc9a6dbcf45ed6f7b.scope/hugetlb.2MB.events\""
	Dec 19 03:23:23 old-k8s-version-002036 containerd[455]: time="2025-12-19T03:23:23.973006318Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b59ee1_673b_4dbe_bc2c_d2ff2e3a620c.slice/cri-containerd-cdf87bf1433e7c2e0dae2c3a75335eb849fc8e2aa686dccdc9a6dbcf45ed6f7b.scope/hugetlb.1GB.events\""
	
	
	==> coredns [863ed2b101014d0d94e2da07371bcbcbadfc937c4d27e332e7b9c083babe32b7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41296 - 26157 "HINFO IN 1571955820553979720.298796393754182401. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.051572035s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [f5a6828923844354aa75ffc6f9543fa3041f3bf3b66c134daad8384cab76bf5e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40471 - 45478 "HINFO IN 4731433819679745007.4400201330808537038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028889842s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-002036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-002036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-002036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_03_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-002036
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:20:33 +0000   Fri, 19 Dec 2025 03:03:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:20:33 +0000   Fri, 19 Dec 2025 03:03:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:20:33 +0000   Fri, 19 Dec 2025 03:03:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:20:33 +0000   Fri, 19 Dec 2025 03:04:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-002036
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                09e77726-e089-467e-8671-1211ba943cda
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-l88tx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-002036                              100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-2hplz                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-002036                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-002036           200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-666m9                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-002036                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-jjqwh                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6fc4d946b9-gd6qk                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-745d5d46bb-rkfcv               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-9xprh                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-n2wrh                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x3 over 19m)  kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node old-k8s-version-002036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-002036 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-002036 event: Registered Node old-k8s-version-002036 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-002036 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x9 over 18m)  kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet          Node old-k8s-version-002036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-002036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-002036 event: Registered Node old-k8s-version-002036 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [b9960e975d03179064f3d3dbc5a7f50353ebbf7f7387cd0e71d6d953f2052787] <==
	{"level":"info","ts":"2025-12-19T03:04:41.733296Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-19T03:04:41.733393Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-19T03:04:41.733428Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-19T03:04:41.733572Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T03:04:41.73396Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:04:43.62157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:43.621631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:43.621673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:43.621686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.621696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.621704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.621714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:43.623118Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-002036 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:04:43.623141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:43.623183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:43.62338Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:43.623407Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:43.625639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:04:43.625642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-19T03:14:43.6452Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1097}
	{"level":"info","ts":"2025-12-19T03:14:43.666734Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1097,"took":"21.16199ms","hash":1109939069}
	{"level":"info","ts":"2025-12-19T03:14:43.666796Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1109939069,"revision":1097,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:43.650369Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1349}
	{"level":"info","ts":"2025-12-19T03:19:43.651564Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1349,"took":"878.747µs","hash":2791061206}
	{"level":"info","ts":"2025-12-19T03:19:43.651633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2791061206,"revision":1349,"compact-revision":1097}
	
	
	==> etcd [c0f0972814035ffda82727b1fcf47abe8c12064548f484c2c6c7ece1325d5770] <==
	{"level":"info","ts":"2025-12-19T03:03:36.230875Z","caller":"traceutil/trace.go:171","msg":"trace[1916280320] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"113.321236ms","start":"2025-12-19T03:03:36.117546Z","end":"2025-12-19T03:03:36.230867Z","steps":["trace[1916280320] 'process raft request'  (duration: 112.606237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.230943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.934128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-19T03:03:36.23097Z","caller":"traceutil/trace.go:171","msg":"trace[472830767] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:28; }","duration":"112.960962ms","start":"2025-12-19T03:03:36.117999Z","end":"2025-12-19T03:03:36.23096Z","steps":["trace[472830767] 'agreement among raft nodes before linearized reading'  (duration: 112.918738ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399146Z","caller":"traceutil/trace.go:171","msg":"trace[1612208426] transaction","detail":"{read_only:false; response_revision:30; number_of_response:1; }","duration":"162.707672ms","start":"2025-12-19T03:03:36.236391Z","end":"2025-12-19T03:03:36.399099Z","steps":["trace[1612208426] 'process raft request'  (duration: 112.442821ms)","trace[1612208426] 'compare'  (duration: 50.023455ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:36.399196Z","caller":"traceutil/trace.go:171","msg":"trace[58388472] transaction","detail":"{read_only:false; response_revision:36; number_of_response:1; }","duration":"161.685609ms","start":"2025-12-19T03:03:36.237504Z","end":"2025-12-19T03:03:36.399189Z","steps":["trace[58388472] 'process raft request'  (duration: 161.593823ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399193Z","caller":"traceutil/trace.go:171","msg":"trace[1619775753] transaction","detail":"{read_only:false; response_revision:35; number_of_response:1; }","duration":"161.656388ms","start":"2025-12-19T03:03:36.237507Z","end":"2025-12-19T03:03:36.399164Z","steps":["trace[1619775753] 'process raft request'  (duration: 161.573032ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399151Z","caller":"traceutil/trace.go:171","msg":"trace[912657803] transaction","detail":"{read_only:false; response_revision:37; number_of_response:1; }","duration":"160.974586ms","start":"2025-12-19T03:03:36.238163Z","end":"2025-12-19T03:03:36.399137Z","steps":["trace[912657803] 'process raft request'  (duration: 160.952651ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399232Z","caller":"traceutil/trace.go:171","msg":"trace[493132379] transaction","detail":"{read_only:false; response_revision:31; number_of_response:1; }","duration":"162.045713ms","start":"2025-12-19T03:03:36.237166Z","end":"2025-12-19T03:03:36.399212Z","steps":["trace[493132379] 'process raft request'  (duration: 161.814271ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399237Z","caller":"traceutil/trace.go:171","msg":"trace[402602698] transaction","detail":"{read_only:false; response_revision:33; number_of_response:1; }","duration":"161.746686ms","start":"2025-12-19T03:03:36.237451Z","end":"2025-12-19T03:03:36.399197Z","steps":["trace[402602698] 'process raft request'  (duration: 161.592106ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399302Z","caller":"traceutil/trace.go:171","msg":"trace[1236662667] transaction","detail":"{read_only:false; response_revision:34; number_of_response:1; }","duration":"161.820279ms","start":"2025-12-19T03:03:36.23746Z","end":"2025-12-19T03:03:36.399281Z","steps":["trace[1236662667] 'process raft request'  (duration: 161.60246ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:36.399301Z","caller":"traceutil/trace.go:171","msg":"trace[91038073] transaction","detail":"{read_only:false; response_revision:32; number_of_response:1; }","duration":"161.879529ms","start":"2025-12-19T03:03:36.237412Z","end":"2025-12-19T03:03:36.399291Z","steps":["trace[91038073] 'process raft request'  (duration: 161.606202ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.880457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.765633ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790777148805294 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/flowschemas/probes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/probes\" value_size:596 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-19T03:03:36.880718Z","caller":"traceutil/trace.go:171","msg":"trace[514249475] linearizableReadLoop","detail":"{readStateIndex:50; appliedIndex:48; }","duration":"296.169834ms","start":"2025-12-19T03:03:36.584536Z","end":"2025-12-19T03:03:36.880705Z","steps":["trace[514249475] 'read index received'  (duration: 76.710993ms)","trace[514249475] 'applied index is now lower than readState.Index'  (duration: 219.457993ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:36.880717Z","caller":"traceutil/trace.go:171","msg":"trace[2140422772] transaction","detail":"{read_only:false; response_revision:45; number_of_response:1; }","duration":"428.202523ms","start":"2025-12-19T03:03:36.452492Z","end":"2025-12-19T03:03:36.880694Z","steps":["trace[2140422772] 'process raft request'  (duration: 208.783793ms)","trace[2140422772] 'compare'  (duration: 218.644109ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:36.880764Z","caller":"traceutil/trace.go:171","msg":"trace[2146429374] transaction","detail":"{read_only:false; response_revision:46; number_of_response:1; }","duration":"423.797756ms","start":"2025-12-19T03:03:36.45696Z","end":"2025-12-19T03:03:36.880758Z","steps":["trace[2146429374] 'process raft request'  (duration: 423.60052ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.880803Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:03:36.452473Z","time spent":"428.296663ms","remote":"127.0.0.1:55772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":632,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/probes\" mod_revision:0 > success:<request_put:<key:\"/registry/flowschemas/probes\" value_size:596 >> failure:<>"}
	{"level":"warn","ts":"2025-12-19T03:03:36.880906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.358394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-19T03:03:36.880958Z","caller":"traceutil/trace.go:171","msg":"trace[1343287706] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:46; }","duration":"296.429709ms","start":"2025-12-19T03:03:36.584515Z","end":"2025-12-19T03:03:36.880945Z","steps":["trace[1343287706] 'agreement among raft nodes before linearized reading'  (duration: 296.28034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:03:36.881032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:03:36.456934Z","time spent":"423.856067ms","remote":"127.0.0.1:55772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1120,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/flowschemas/system-node-high\" mod_revision:43 > success:<request_put:<key:\"/registry/flowschemas/system-node-high\" value_size:1074 >> failure:<request_range:<key:\"/registry/flowschemas/system-node-high\" > >"}
	{"level":"warn","ts":"2025-12-19T03:03:37.230147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.376525ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790777148805359 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:heapster\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:heapster\" value_size:579 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-19T03:03:37.230243Z","caller":"traceutil/trace.go:171","msg":"trace[1162357764] linearizableReadLoop","detail":"{readStateIndex:88; appliedIndex:87; }","duration":"135.883415ms","start":"2025-12-19T03:03:37.094347Z","end":"2025-12-19T03:03:37.23023Z","steps":["trace[1162357764] 'read index received'  (duration: 28.791µs)","trace[1162357764] 'applied index is now lower than readState.Index'  (duration: 135.853656ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:03:37.230333Z","caller":"traceutil/trace.go:171","msg":"trace[1991665803] transaction","detail":"{read_only:false; response_revision:84; number_of_response:1; }","duration":"209.39897ms","start":"2025-12-19T03:03:37.020904Z","end":"2025-12-19T03:03:37.230303Z","steps":["trace[1991665803] 'process raft request'  (duration: 64.803363ms)","trace[1991665803] 'compare'  (duration: 144.248743ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:03:37.230375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.063964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-19T03:03:37.230403Z","caller":"traceutil/trace.go:171","msg":"trace[916164666] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:84; }","duration":"136.098902ms","start":"2025-12-19T03:03:37.094295Z","end":"2025-12-19T03:03:37.230394Z","steps":["trace[916164666] 'agreement among raft nodes before linearized reading'  (duration: 135.974814ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:03:37.420394Z","caller":"traceutil/trace.go:171","msg":"trace[973871162] transaction","detail":"{read_only:false; response_revision:86; number_of_response:1; }","duration":"121.376207ms","start":"2025-12-19T03:03:37.298996Z","end":"2025-12-19T03:03:37.420372Z","steps":["trace[973871162] 'process raft request'  (duration: 49.341965ms)","trace[973871162] 'compare'  (duration: 71.905634ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:23:28 up  2:05,  0 user,  load average: 0.22, 0.48, 3.15
	Linux old-k8s-version-002036 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [39c0ef083f8a43337d9a7d982f0384e5b9fb7dfc8d1288356f134d0cbd5f67dc] <==
	I1219 03:03:57.159084       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:03:57.159387       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1219 03:03:57.159533       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:03:57.159559       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:03:57.254823       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:03:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:03:57.459937       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:03:57.459962       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:03:57.459975       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:03:57.555262       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:03:57.760161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:03:57.760224       1 metrics.go:72] Registering metrics
	I1219 03:03:57.760279       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:07.462743       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:04:07.462833       1 main.go:301] handling current node
	I1219 03:04:17.460692       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:04:17.460739       1 main.go:301] handling current node
	
	
	==> kindnet [bba7b1d6bc96c331ddb22fa76f1a84d6155438b0895d2c1747dc5fba25b38401] <==
	I1219 03:21:26.412743       1 main.go:301] handling current node
	I1219 03:21:36.410864       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:21:36.410908       1 main.go:301] handling current node
	I1219 03:21:46.410871       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:21:46.410920       1 main.go:301] handling current node
	I1219 03:21:56.412364       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:21:56.412430       1 main.go:301] handling current node
	I1219 03:22:06.411088       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:06.411142       1 main.go:301] handling current node
	I1219 03:22:16.411649       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:16.411695       1 main.go:301] handling current node
	I1219 03:22:26.410563       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:26.410609       1 main.go:301] handling current node
	I1219 03:22:36.418647       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:36.418678       1 main.go:301] handling current node
	I1219 03:22:46.410689       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:46.410745       1 main.go:301] handling current node
	I1219 03:22:56.415491       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:56.415525       1 main.go:301] handling current node
	I1219 03:23:06.418477       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:06.418523       1 main.go:301] handling current node
	I1219 03:23:16.411999       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:16.412046       1 main.go:301] handling current node
	I1219 03:23:26.411372       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:26.411440       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dba3065c9833e9f4afe46526bd689bf64734d83bf5f365ffb830dec4dcecc528] <==
	I1219 03:03:52.565246       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:03:52.565297       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:03:52.616476       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	W1219 03:04:22.106806       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.106892       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:04:22.107256       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:04:22.107280       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:22.112388       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.112459       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1219 03:04:22.112516       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:22.112547       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:04:22.112559       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:04:22.193411       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.105.1.59"}
	W1219 03:04:22.204323       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.204399       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:04:22.204826       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:04:22.204852       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:22.209834       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:04:22.209905       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:04:22.210310       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:04:22.210330       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-apiserver [dfe38bb0dfc8678344b93cade34ee754a193ec59d80c901088ead56815e08751] <==
	E1219 03:19:45.672156       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:19:45.672163       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:19:45.672226       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:19:45.673353       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:20:44.598540       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:20:44.598560       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:20:45.672812       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:20:45.672854       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:20:45.672861       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:20:45.673922       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:20:45.674015       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:20:45.674027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:21:44.598486       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:21:44.598510       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:22:44.598308       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.105.1.59:443: connect: connection refused
	I1219 03:22:44.598330       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:22:45.673844       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:22:45.673879       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:22:45.673886       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:22:45.674924       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:22:45.675004       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:22:45.675019       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [42172d3a4d4cb7aacee226a539d40edb403b90518feb569cc0854014a8a2daf3] <==
	I1219 03:03:52.620521       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1219 03:03:52.821801       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-szgdv"
	I1219 03:03:52.830654       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l88tx"
	I1219 03:03:52.841643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="221.202983ms"
	I1219 03:03:52.859450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.736845ms"
	I1219 03:03:52.860040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.174µs"
	I1219 03:03:53.591494       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1219 03:03:53.602218       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-szgdv"
	I1219 03:03:53.611109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.006711ms"
	I1219 03:03:53.627903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.69768ms"
	I1219 03:03:53.628040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.405µs"
	I1219 03:03:53.628172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.457µs"
	I1219 03:04:07.521462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.47µs"
	I1219 03:04:07.542469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.558µs"
	I1219 03:04:08.849563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.629µs"
	I1219 03:04:08.879779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.360678ms"
	I1219 03:04:08.879910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.84µs"
	I1219 03:04:11.777875       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1219 03:04:22.126978       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1219 03:04:22.134658       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-jjqwh"
	I1219 03:04:22.140990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="14.273739ms"
	I1219 03:04:22.151649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="10.595163ms"
	I1219 03:04:22.151800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="77.832µs"
	I1219 03:04:22.156676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="110.922µs"
	I1219 03:04:22.362818       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [9b508ac5bcc2f846c1cdb876db92c8775786a89e673d47b06edc47330b0dd92c] <==
	I1219 03:19:27.886220       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:19:57.485976       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:19:57.895950       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:20:27.490292       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:20:27.904156       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:20:57.495940       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:20:57.911314       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:21:24.009717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="185.092µs"
	E1219 03:21:27.500507       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:21:27.919128       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:21:35.009324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="117.38µs"
	I1219 03:21:43.010296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="134.611µs"
	I1219 03:21:46.010512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb" duration="139.803µs"
	I1219 03:21:50.015111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="183.868µs"
	E1219 03:21:57.505462       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:21:57.926806       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:21:58.010791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="109.818µs"
	I1219 03:22:00.010755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb" duration="126.398µs"
	I1219 03:22:01.018496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="95.996µs"
	E1219 03:22:27.510148       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:22:27.934730       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:22:57.515207       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:22:57.942708       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:23:27.520041       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:23:27.951337       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8763b9d407817a6fdeefb4eca9030abd4e87157819227695d43c3f1e30e5db56] <==
	I1219 03:03:53.465320       1 server_others.go:69] "Using iptables proxy"
	I1219 03:03:53.480477       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1219 03:03:53.531303       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:03:53.536677       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:03:53.536723       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:03:53.536731       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:03:53.536771       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:03:53.537131       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:03:53.537460       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:03:53.538383       1 config.go:188] "Starting service config controller"
	I1219 03:03:53.538469       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:03:53.538387       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:03:53.538626       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:03:53.539146       1 config.go:315] "Starting node config controller"
	I1219 03:03:53.539160       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:03:53.639550       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:03:53.639522       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:03:53.639640       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [cdf87bf1433e7c2e0dae2c3a75335eb849fc8e2aa686dccdc9a6dbcf45ed6f7b] <==
	I1219 03:04:45.727676       1 server_others.go:69] "Using iptables proxy"
	I1219 03:04:45.738908       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1219 03:04:45.786634       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:45.793346       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:04:45.793394       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:04:45.793400       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:04:45.793431       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:04:45.793752       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:04:45.793825       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:45.794528       1 config.go:315] "Starting node config controller"
	I1219 03:04:45.794616       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:04:45.794895       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:04:45.794998       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:04:45.794953       1 config.go:188] "Starting service config controller"
	I1219 03:04:45.795052       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:04:45.895061       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:04:45.895083       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:04:45.895109       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [4c30550e453e3bd638fc05cb1fab0a37a8d5dca882572ea1f9f7632acfd2724f] <==
	W1219 03:03:36.966145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1219 03:03:36.966321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1219 03:03:36.973470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1219 03:03:36.973531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1219 03:03:37.108846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.108884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:37.148690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:03:37.148743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:03:37.180898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1219 03:03:37.180946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1219 03:03:37.227624       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.227665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:37.250310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1219 03:03:37.250342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1219 03:03:37.263181       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1219 03:03:37.263215       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:03:37.274139       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.274174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:37.281849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:03:37.281886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:03:37.368720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1219 03:03:37.368754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1219 03:03:37.574481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:37.574555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1219 03:03:40.469522       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec41efb71d11f10b0b94642489d3834fdc3d5928e6b0c2b8ffff7125bd7af0b5] <==
	I1219 03:04:42.529910       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:04:44.641558       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:04:44.641637       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:44.641657       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:04:44.641667       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:04:44.669727       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:04:44.669792       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:44.672755       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:44.672806       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:04:44.673809       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:04:44.673982       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:04:44.774012       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:22:01 old-k8s-version-002036 kubelet[593]: E1219 03:22:01.003105     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:22:11 old-k8s-version-002036 kubelet[593]: E1219 03:22:11.999486     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:22:11 old-k8s-version-002036 kubelet[593]: E1219 03:22:11.999495     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:22:13 old-k8s-version-002036 kubelet[593]: E1219 03:22:13.000021     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:22:13 old-k8s-version-002036 kubelet[593]: E1219 03:22:13.000078     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:22:23 old-k8s-version-002036 kubelet[593]: E1219 03:22:23.999069     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:22:25 old-k8s-version-002036 kubelet[593]: E1219 03:22:25.000097     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:22:25 old-k8s-version-002036 kubelet[593]: E1219 03:22:25.999615     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:22:25 old-k8s-version-002036 kubelet[593]: E1219 03:22:25.999681     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:22:38 old-k8s-version-002036 kubelet[593]: E1219 03:22:38.999891     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:22:39 old-k8s-version-002036 kubelet[593]: E1219 03:22:39.999281     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:22:39 old-k8s-version-002036 kubelet[593]: E1219 03:22:39.999344     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:22:40 old-k8s-version-002036 kubelet[593]: E1219 03:22:40.999757     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:22:51 old-k8s-version-002036 kubelet[593]: E1219 03:22:51.999895     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:22:51 old-k8s-version-002036 kubelet[593]: E1219 03:22:51.999948     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:22:54 old-k8s-version-002036 kubelet[593]: E1219 03:22:54.999461     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:22:55 old-k8s-version-002036 kubelet[593]: E1219 03:22:55.999302     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:23:04 old-k8s-version-002036 kubelet[593]: E1219 03:23:04.999596     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:23:05 old-k8s-version-002036 kubelet[593]: E1219 03:23:05.999917     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:23:06 old-k8s-version-002036 kubelet[593]: E1219 03:23:06.999830     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:23:08 old-k8s-version-002036 kubelet[593]: E1219 03:23:08.000079     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	Dec 19 03:23:16 old-k8s-version-002036 kubelet[593]: E1219 03:23:16.999107     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" podUID="f77a9e0e-0fd2-4ced-bf3d-fb72882b3980"
	Dec 19 03:23:16 old-k8s-version-002036 kubelet[593]: E1219 03:23:16.999180     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jjqwh" podUID="63e47cb7-d727-4ce2-89f3-e22c05efecc0"
	Dec 19 03:23:20 old-k8s-version-002036 kubelet[593]: E1219 03:23:20.999354     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-745d5d46bb-rkfcv" podUID="e9c94a56-a570-4cac-9bce-35194c8d5146"
	Dec 19 03:23:20 old-k8s-version-002036 kubelet[593]: E1219 03:23:20.999367     593 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-9xprh" podUID="c0cf4d53-8fa6-470a-86ca-0c401ffda271"
	
	
	==> kubernetes-dashboard [71825ae44f5277e1ab0659c4cf232265a66e3271a0ea4220f8f56d30ed22a8b1] <==
	E1219 03:11:07.121051       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:11:37.124456       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:12:07.127703       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:12:37.130492       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:13:07.134259       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:13:37.138135       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:14:07.142148       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:14:37.145316       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:15:07.152765       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:15:37.155524       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:16:07.159015       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:16:37.162529       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:17:07.166146       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:17:37.168535       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:18:07.171574       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:18:37.174896       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:19:07.178498       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:19:37.181871       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:20:07.185685       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:20:37.188691       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:21:07.192128       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:21:37.195844       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:22:07.199670       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:22:37.202175       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:23:07.205674       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> kubernetes-dashboard [a7b7fd7bf394e74ab791d76919b0a3eeaa8297034b785789903fd48bb69b157a] <==
	I1219 03:05:03.763352       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:03.763415       1 init.go:48] Using in-cluster config
	I1219 03:05:03.763693       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [219023786529f0d2b2e8db1c37d04dd25946c1f17c1199c8669d4d942666f005] <==
	I1219 03:05:29.083053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:29.093235       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:29.093289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:05:46.492337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:46.492464       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6a8ba3d9-8dbe-480f-9c35-c4b324977dc6", APIVersion:"v1", ResourceVersion:"825", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-002036_45c5b5d0-fab5-41e7-a9c3-e0b08402b1a5 became leader
	I1219 03:05:46.492521       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-002036_45c5b5d0-fab5-41e7-a9c3-e0b08402b1a5!
	I1219 03:05:46.593295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-002036_45c5b5d0-fab5-41e7-a9c3-e0b08402b1a5!
	
	
	==> storage-provisioner [27b2e16e5c09e9cff4cce562e7b84a5be956640cf474813346451004c553041c] <==
	I1219 03:04:45.663799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:15.666834       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-002036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-002036 describe pod metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-002036 describe pod metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct: exit status 1 (67.075373ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-jjqwh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-745d5d46bb-rkfcv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-f487b85cd-9xprh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct" not found

** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-002036 describe pod metrics-server-57f55c9bc5-jjqwh kubernetes-dashboard-auth-745d5d46bb-rkfcv kubernetes-dashboard-kong-f487b85cd-9xprh kubernetes-dashboard-metrics-scraper-6b5c7dc479-4krct: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.21s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.9s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:28:58.716669159 +0000 UTC m=+3815.709793433
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-536489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-536489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (66.324501ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-536489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-536489
helpers_test.go:244: (dbg) docker inspect embed-certs-536489:

-- stdout --
	[
	    {
	        "Id": "0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31",
	        "Created": "2025-12-19T03:03:37.532560338Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 568553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:40.119931988Z",
	            "FinishedAt": "2025-12-19T03:04:39.135554624Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/hosts",
	        "LogPath": "/var/lib/docker/containers/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31/0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31-json.log",
	        "Name": "/embed-certs-536489",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-536489:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-536489",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a12a246db9ea78b9db1ef0e13288ded144ec4a62c92ad45270d3a17a9d87b31",
	                "LowerDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1779675f3732594445db4c9a57aa5e82b9afc86b77057330369723c206eb251e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-536489",
	                "Source": "/var/lib/docker/volumes/embed-certs-536489/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-536489",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-536489",
	                "name.minikube.sigs.k8s.io": "embed-certs-536489",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "62dfef23b82a9174bb43617105479520225e65d2827cb4760f18a3c40bd5051d",
	            "SandboxKey": "/var/run/docker/netns/62dfef23b82a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-536489": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9183f6f1b01c9bf1232449e4edccbafc7ca8c7340f355e79ef181320c71bc1bf",
	                    "EndpointID": "cefbdced7c77040bacd3818765fca7955502ceb83fa96908456634fab1d699c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ee:9d:cc:4a:af:61",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-536489",
	                        "0a12a246db9e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-536489 -n embed-certs-536489
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-536489 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-536489 logs -n 25: (1.512387647s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───
──────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───
──────────────────┤
	│ stop    │ -p old-k8s-version-002036 --alsologtostderr -v=3                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                 │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p embed-certs-536489 --alsologtostderr -v=3                                                                                                                                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0      │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	│ image   │ old-k8s-version-002036 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-002036 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ unpause │ -p old-k8s-version-002036 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ delete  │ -p old-k8s-version-002036                                                                                                                                                                                                                                │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ delete  │ -p old-k8s-version-002036                                                                                                                                                                                                                                │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ start   │ -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-017890 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ stop    │ -p newest-cni-017890 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-017890 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───
──────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.551556  597843 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.551710  597843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.551722  597843 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.551907  597843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.552523  597843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:24:01.553444  597843 out.go:368] Setting JSON to false
	I1219 03:24:01.554824  597843 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7581,"bootTime":1766107061,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.554941  597843 start.go:143] virtualization: kvm guest
	I1219 03:24:01.556480  597843 out.go:179] * [newest-cni-017890] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.557959  597843 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.558051  597843 notify.go:221] Checking for updates...
	I1219 03:24:01.560013  597843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.561074  597843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:24:01.562040  597843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:24:01.563073  597843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.564045  597843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.565416  597843 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.565980  597843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.591006  597843 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.591163  597843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.654185  597843 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.642935504 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.654289  597843 docker.go:319] overlay module found
	I1219 03:24:01.655871  597843 out.go:179] * Using the docker driver based on existing profile
	I1219 03:24:01.657020  597843 start.go:309] selected driver: docker
	I1219 03:24:01.657040  597843 start.go:928] validating driver "docker" against &{Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.657177  597843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.657853  597843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.714604  597843 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.704751605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.714967  597843 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.715007  597843 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.715105  597843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:24:01.715196  597843 start.go:353] cluster config:
	{Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.716916  597843 out.go:179] * Starting "newest-cni-017890" primary control-plane node in "newest-cni-017890" cluster
	I1219 03:24:01.717773  597843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:24:01.718635  597843 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.719569  597843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:24:01.719625  597843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.719643  597843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 03:24:01.719660  597843 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.719745  597843 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.719760  597843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1219 03:24:01.719887  597843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/config.json ...
	I1219 03:24:01.740614  597843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.740636  597843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.740658  597843 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.740701  597843 start.go:360] acquireMachinesLock for newest-cni-017890: {Name:mk26fbc65f425d2942ec43638d9c096d91448606 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.740774  597843 start.go:364] duration metric: took 45.332µs to acquireMachinesLock for "newest-cni-017890"
	I1219 03:24:01.740800  597843 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:24:01.740810  597843 fix.go:54] fixHost starting: 
	I1219 03:24:01.741087  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:01.759521  597843 fix.go:112] recreateIfNeeded on newest-cni-017890: state=Stopped err=<nil>
	W1219 03:24:01.759565  597843 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:24:01.761307  597843 out.go:252] * Restarting existing docker container for "newest-cni-017890" ...
	I1219 03:24:01.761400  597843 cli_runner.go:164] Run: docker start newest-cni-017890
	I1219 03:24:02.017076  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:02.038183  597843 kic.go:430] container "newest-cni-017890" state is running.
	I1219 03:24:02.038784  597843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-017890
	I1219 03:24:02.057901  597843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/config.json ...
	I1219 03:24:02.058153  597843 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:02.058257  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:02.077426  597843 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:02.077827  597843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1219 03:24:02.077849  597843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:02.078525  597843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58000->127.0.0.1:33108: read: connection reset by peer
	I1219 03:24:05.226915  597843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-017890
	
	I1219 03:24:05.226951  597843 ubuntu.go:182] provisioning hostname "newest-cni-017890"
	I1219 03:24:05.227032  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.245436  597843 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:05.245691  597843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1219 03:24:05.245713  597843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-017890 && echo "newest-cni-017890" | sudo tee /etc/hostname
	I1219 03:24:05.401463  597843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-017890
	
	I1219 03:24:05.401547  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.420728  597843 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:05.420981  597843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1219 03:24:05.420998  597843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-017890' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-017890/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-017890' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:05.565256  597843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:05.565300  597843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:24:05.565337  597843 ubuntu.go:190] setting up certificates
	I1219 03:24:05.565349  597843 provision.go:84] configureAuth start
	I1219 03:24:05.565402  597843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-017890
	I1219 03:24:05.584718  597843 provision.go:143] copyHostCerts
	I1219 03:24:05.584776  597843 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:24:05.584792  597843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:24:05.584865  597843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:24:05.585061  597843 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:24:05.585075  597843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:24:05.585118  597843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:24:05.585196  597843 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:24:05.585204  597843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:24:05.585229  597843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:24:05.585291  597843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.newest-cni-017890 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-017890]
	I1219 03:24:05.728004  597843 provision.go:177] copyRemoteCerts
	I1219 03:24:05.728065  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:05.728101  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.746601  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:05.850371  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:05.868683  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:24:05.887271  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:05.907779  597843 provision.go:87] duration metric: took 342.411585ms to configureAuth
	I1219 03:24:05.907808  597843 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:05.908005  597843 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:05.908020  597843 machine.go:97] duration metric: took 3.849847847s to provisionDockerMachine
	I1219 03:24:05.908029  597843 start.go:293] postStartSetup for "newest-cni-017890" (driver="docker")
	I1219 03:24:05.908040  597843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:05.908082  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:05.908126  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.926181  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.031728  597843 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:06.035533  597843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:06.035563  597843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:06.035576  597843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:24:06.035658  597843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:24:06.035762  597843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:24:06.035894  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:06.044118  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:24:06.063122  597843 start.go:296] duration metric: took 155.077572ms for postStartSetup
	I1219 03:24:06.063208  597843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:06.063254  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:06.081904  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.182234  597843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:06.187009  597843 fix.go:56] duration metric: took 4.446188945s for fixHost
	I1219 03:24:06.187038  597843 start.go:83] releasing machines lock for "newest-cni-017890", held for 4.446249432s
	I1219 03:24:06.187140  597843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-017890
	I1219 03:24:06.205248  597843 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:06.205318  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:06.205338  597843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:06.205413  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:06.224981  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.225198  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.387880  597843 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:06.395185  597843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:06.399905  597843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:06.399981  597843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:06.408633  597843 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:24:06.408657  597843 start.go:496] detecting cgroup driver to use...
	I1219 03:24:06.408690  597843 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:06.408738  597843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:24:06.425646  597843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:24:06.438969  597843 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:06.439029  597843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:06.454372  597843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:06.467895  597843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:06.550002  597843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:06.635564  597843 docker.go:234] disabling docker service ...
	I1219 03:24:06.635686  597843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:06.651872  597843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:06.665206  597843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:06.751936  597843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:06.835058  597843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:06.847793  597843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:06.862660  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:24:06.872135  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:24:06.881406  597843 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:24:06.881467  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:24:06.891421  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:24:06.900927  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:24:06.910491  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:24:06.919635  597843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:06.927769  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:24:06.936983  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:24:06.946749  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:24:06.956517  597843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:06.964790  597843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:06.973869  597843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:07.059555  597843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:24:07.167126  597843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:24:07.167191  597843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:24:07.171613  597843 start.go:564] Will wait 60s for crictl version
	I1219 03:24:07.171670  597843 ssh_runner.go:195] Run: which crictl
	I1219 03:24:07.175300  597843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:07.201572  597843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:24:07.201669  597843 ssh_runner.go:195] Run: containerd --version
	I1219 03:24:07.224745  597843 ssh_runner.go:195] Run: containerd --version
	I1219 03:24:07.248441  597843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1219 03:24:07.249524  597843 cli_runner.go:164] Run: docker network inspect newest-cni-017890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:07.267447  597843 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:07.271636  597843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:07.283419  597843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:07.284366  597843 kubeadm.go:884] updating cluster {Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:07.284534  597843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:24:07.284616  597843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:07.311719  597843 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:24:07.311745  597843 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:24:07.311808  597843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:07.338880  597843 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:24:07.338904  597843 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:07.338911  597843 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:24:07.339018  597843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-017890 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:07.339070  597843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:24:07.366831  597843 cni.go:84] Creating CNI manager for ""
	I1219 03:24:07.366854  597843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:24:07.366877  597843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:07.366914  597843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-017890 NodeName:newest-cni-017890 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:07.367047  597843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-017890"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:07.367129  597843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:07.375406  597843 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:07.375480  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:07.383556  597843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:24:07.397253  597843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:07.410328  597843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1219 03:24:07.422986  597843 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:07.426716  597843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:07.436890  597843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:07.521758  597843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:07.548164  597843 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890 for IP: 192.168.103.2
	I1219 03:24:07.548185  597843 certs.go:195] generating shared ca certs ...
	I1219 03:24:07.548199  597843 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:07.548383  597843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:24:07.548436  597843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:24:07.548447  597843 certs.go:257] generating profile certs ...
	I1219 03:24:07.548530  597843 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/client.key
	I1219 03:24:07.548623  597843 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/apiserver.key.6f00a9c9
	I1219 03:24:07.548670  597843 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/proxy-client.key
	I1219 03:24:07.548771  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:24:07.548802  597843 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:07.548812  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:24:07.548850  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:07.548874  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:07.548909  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:07.548958  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:24:07.549552  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:07.570143  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:07.590219  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:07.610623  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:24:07.634629  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:07.657557  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:07.676631  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:07.694598  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:24:07.712959  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:07.731244  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:24:07.750319  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:24:07.771334  597843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:07.785765  597843 ssh_runner.go:195] Run: openssl version
	I1219 03:24:07.793440  597843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.801622  597843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:24:07.809393  597843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.813382  597843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.813449  597843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.850229  597843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:07.858513  597843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.865970  597843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:24:07.873380  597843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.877201  597843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.877255  597843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.913110  597843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:07.921263  597843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.930204  597843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:07.938297  597843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.942462  597843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.942530  597843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.979726  597843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:07.987766  597843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:07.991691  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:24:08.028949  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:24:08.064356  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:24:08.116711  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:24:08.176379  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:24:08.232783  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:24:08.284707  597843 kubeadm.go:401] StartCluster: {Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:08.284882  597843 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:08.284957  597843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:08.333947  597843 cri.go:92] found id: "83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e"
	I1219 03:24:08.333977  597843 cri.go:92] found id: "979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c"
	I1219 03:24:08.333982  597843 cri.go:92] found id: "53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd"
	I1219 03:24:08.333986  597843 cri.go:92] found id: "8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f"
	I1219 03:24:08.333990  597843 cri.go:92] found id: "acb96d7d74ab88f884b6bcdeb54e82450832b1f30782d5bfa67316f6ec297f53"
	I1219 03:24:08.333994  597843 cri.go:92] found id: "73fb53cbfdbfe23af6b14e5ace7d3ef32097074620b3f59326a8bb6ef351fc41"
	I1219 03:24:08.333998  597843 cri.go:92] found id: "aa62d373b4db03ea13900f92ce5fa0b1c75228547e72f0b8988617a9423b8531"
	I1219 03:24:08.334011  597843 cri.go:92] found id: "034ac5a3e39903cc61975f0b912f650b8223647d9830ebc9479f6cc938e8a264"
	I1219 03:24:08.334015  597843 cri.go:92] found id: "a3c4dff33f0219049afb9aaf4cf5be196e7dcb3d952e9ea0062808a832ca66bd"
	I1219 03:24:08.334025  597843 cri.go:92] found id: ""
	I1219 03:24:08.334078  597843 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:24:08.366112  597843 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","pid":832,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f/rootfs","created":"2025-12-19T03:24:08.141421132Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-017890_ed97dfd8df93f9e7cb623b478ae52abb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed97dfd8df93f9e7cb623b478ae52abb"},"owner":"root"},{"ociVersion":"1.2.1","id":"53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd","pid":946,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd/rootfs","created":"2025-12-19T03:24:08.277254842Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9dc1978d92ca7a4bd42b11590dd8157f"},"owner":"root"},{"ociVersion":"1.2.1","id":"7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","pid":819,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d/rootfs","created":"2025-12-19T03:24:08.136706319Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-017890_9dc1978d92ca7a4bd42b11590dd8157f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9dc1978d92ca7a4bd42b11590dd8157f"},"owner":"root"},{"ociVersion":"1.2.1","id":"83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e","pid":985,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e/rootfs","created":"2025-12-19T03:24:08.296298017Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9bba4882296cd6b19c27ea1449a86cb4"},"owner":"root"},{"ociVersion":"1.2.1","id":"8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f","pid":939,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f/rootfs","created":"2025-12-19T03:24:08.275699538Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed97dfd8df93f9e7cb623b478ae52abb"},"owner":"root"},{"ociVersion":"1.2.1","id":"979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c","pid":977,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c/rootfs","created":"2025-12-19T03:24:08.280803642Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7a25d2998d02ae38bf3d0b066059a6f"},"owner":"root"},{"ociVersion":"1.2.1","id":"b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","pid":877,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46/rootfs","created":"2025-12-19T03:24:08.165286073Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-017890_9bba4882296cd6b19c27ea1449a86cb4","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9bba4882296cd6b19c27ea1449a86cb4"},"owner":"root"},{"ociVersion":"1.2.1","id":"d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","pid":870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa/rootfs","created":"2025-12-19T03:24:08.164795348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-017890_d7a25d2998d02ae38bf3d0b066059a6f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7a25d2998d02ae38bf3d0b066059a6f"},"owner":"root"}]
	I1219 03:24:08.366321  597843 cri.go:129] list returned 8 containers
	I1219 03:24:08.366344  597843 cri.go:132] container: {ID:36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f Status:running}
	I1219 03:24:08.366363  597843 cri.go:134] skipping 36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f - not in ps
	I1219 03:24:08.366370  597843 cri.go:132] container: {ID:53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd Status:running}
	I1219 03:24:08.366379  597843 cri.go:138] skipping {53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd running}: state = "running", want "paused"
	I1219 03:24:08.366388  597843 cri.go:132] container: {ID:7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d Status:running}
	I1219 03:24:08.366394  597843 cri.go:134] skipping 7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d - not in ps
	I1219 03:24:08.366399  597843 cri.go:132] container: {ID:83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e Status:running}
	I1219 03:24:08.366407  597843 cri.go:138] skipping {83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e running}: state = "running", want "paused"
	I1219 03:24:08.366413  597843 cri.go:132] container: {ID:8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f Status:running}
	I1219 03:24:08.366419  597843 cri.go:138] skipping {8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f running}: state = "running", want "paused"
	I1219 03:24:08.366426  597843 cri.go:132] container: {ID:979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c Status:running}
	I1219 03:24:08.366433  597843 cri.go:138] skipping {979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c running}: state = "running", want "paused"
	I1219 03:24:08.366439  597843 cri.go:132] container: {ID:b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46 Status:running}
	I1219 03:24:08.366446  597843 cri.go:134] skipping b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46 - not in ps
	I1219 03:24:08.366456  597843 cri.go:132] container: {ID:d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa Status:running}
	I1219 03:24:08.366463  597843 cri.go:134] skipping d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa - not in ps
	I1219 03:24:08.366521  597843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:08.380825  597843 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:24:08.380845  597843 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:24:08.380896  597843 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:24:08.391924  597843 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:24:08.393405  597843 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-017890" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:24:08.394392  597843 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-017890" cluster setting kubeconfig missing "newest-cni-017890" context setting]
	I1219 03:24:08.395869  597843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:08.398165  597843 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:24:08.408442  597843 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1219 03:24:08.408473  597843 kubeadm.go:602] duration metric: took 27.622322ms to restartPrimaryControlPlane
	I1219 03:24:08.408480  597843 kubeadm.go:403] duration metric: took 123.791788ms to StartCluster
	I1219 03:24:08.408494  597843 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:08.408540  597843 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:24:08.410289  597843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:08.410549  597843 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:24:08.410662  597843 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:08.410772  597843 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-017890"
	I1219 03:24:08.410804  597843 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-017890"
	I1219 03:24:08.410797  597843 addons.go:70] Setting default-storageclass=true in profile "newest-cni-017890"
	I1219 03:24:08.410808  597843 addons.go:70] Setting dashboard=true in profile "newest-cni-017890"
	I1219 03:24:08.410827  597843 addons.go:70] Setting metrics-server=true in profile "newest-cni-017890"
	I1219 03:24:08.410838  597843 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-017890"
	I1219 03:24:08.410841  597843 addons.go:239] Setting addon metrics-server=true in "newest-cni-017890"
	I1219 03:24:08.410850  597843 addons.go:239] Setting addon dashboard=true in "newest-cni-017890"
	W1219 03:24:08.410851  597843 addons.go:248] addon metrics-server should already be in state true
	I1219 03:24:08.410859  597843 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	W1219 03:24:08.410861  597843 addons.go:248] addon dashboard should already be in state true
	W1219 03:24:08.410813  597843 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:24:08.410917  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.410947  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.410911  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.411222  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.411370  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.411453  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.411453  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.415989  597843 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:08.417155  597843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:08.439554  597843 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:24:08.439613  597843 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:08.439993  597843 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:08.440027  597843 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:24:08.440166  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.440651  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:24:08.440669  597843 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:24:08.440698  597843 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:08.440710  597843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:08.440720  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.440769  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.441275  597843 addons.go:239] Setting addon default-storageclass=true in "newest-cni-017890"
	W1219 03:24:08.441297  597843 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:24:08.441324  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.441824  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.475608  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.478067  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.478685  597843 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:08.478765  597843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:08.478865  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.480899  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.504129  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.591723  597843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:08.609742  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:08.613751  597843 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:08.613830  597843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:08.614709  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:24:08.614732  597843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:24:08.615956  597843 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:24:08.629974  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:08.632026  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:24:08.632052  597843 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:24:08.648116  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:24:08.648143  597843 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:24:08.667278  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:24:10.885906  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.276119079s)
	I1219 03:24:10.885953  597843 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.2721014s)
	I1219 03:24:10.885974  597843 api_server.go:72] duration metric: took 2.47539585s to wait for apiserver process to appear ...
	I1219 03:24:10.885982  597843 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:10.886000  597843 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:24:10.886017  597843 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.270030396s)
	I1219 03:24:10.886076  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.256071225s)
	I1219 03:24:10.886092  597843 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:24:10.895418  597843 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:10.895447  597843 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:10.896082  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.228772183s)
	I1219 03:24:10.896112  597843 addons.go:500] Verifying addon metrics-server=true in "newest-cni-017890"
	I1219 03:24:10.896177  597843 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:24:10.896434  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:11.387109  597843 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:24:11.392750  597843 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:11.392777  597843 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:11.857689  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:24:11.886949  597843 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:24:11.891539  597843 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1219 03:24:11.892806  597843 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:11.892838  597843 api_server.go:131] duration metric: took 1.006848607s to wait for apiserver health ...
	I1219 03:24:11.892850  597843 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:11.896503  597843 system_pods.go:59] 9 kube-system pods found
	I1219 03:24:11.896560  597843 system_pods.go:61] "coredns-7d764666f9-77plh" [513e22e1-2bdb-4f22-ae50-4988898984ee] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:11.896603  597843 system_pods.go:61] "etcd-newest-cni-017890" [4b1f61d0-c938-4a8c-8a60-c0190ac59d3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:11.896621  597843 system_pods.go:61] "kindnet-jptjc" [9cc93749-83ad-4d71-a1ca-7582fb36d8c0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:11.896634  597843 system_pods.go:61] "kube-apiserver-newest-cni-017890" [f7690839-3434-49cd-ae45-a24dc25ef4f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:11.896645  597843 system_pods.go:61] "kube-controller-manager-newest-cni-017890" [50e7f6db-1f18-406d-b180-828907d447ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:24:11.896652  597843 system_pods.go:61] "kube-proxy-27f9l" [708d3c46-c794-4656-affd-5f14f12062b3] Running
	I1219 03:24:11.896670  597843 system_pods.go:61] "kube-scheduler-newest-cni-017890" [60d5eb7f-c30c-4924-aef0-92e778571bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:24:11.896683  597843 system_pods.go:61] "metrics-server-5d785b57d4-qqtx5" [64c5d9c6-5ca0-41f1-8571-d7bcc7cf596f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:11.896695  597843 system_pods.go:61] "storage-provisioner" [7554cce6-ffd3-49e9-bb5d-841ea14e11c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:11.896706  597843 system_pods.go:74] duration metric: took 3.847345ms to wait for pod list to return data ...
	I1219 03:24:11.896721  597843 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:11.899244  597843 default_sa.go:45] found service account: "default"
	I1219 03:24:11.899274  597843 default_sa.go:55] duration metric: took 2.538656ms for default service account to be created ...
	I1219 03:24:11.899289  597843 kubeadm.go:587] duration metric: took 3.48870959s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:11.899309  597843 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:11.901655  597843 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:11.901684  597843 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:11.901698  597843 node_conditions.go:105] duration metric: took 2.383054ms to run NodePressure ...
	I1219 03:24:11.901714  597843 start.go:242] waiting for startup goroutines ...
	I1219 03:24:14.918167  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.060422646s)
	I1219 03:24:14.918274  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:15.111355  597843 addons.go:500] Verifying addon dashboard=true in "newest-cni-017890"
	I1219 03:24:15.111641  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:15.131003  597843 out.go:179] * Verifying dashboard addon...
	I1219 03:24:15.133142  597843 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:24:15.136271  597843 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:24:15.136289  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:15.636708  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:16.136217  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:16.636609  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:17.136118  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:17.636688  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:18.136697  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:18.636955  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:19.136327  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:19.636958  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:20.136611  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:20.635702  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:21.136714  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:21.637785  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:22.136226  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:22.636384  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:23.137131  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:23.636716  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:24.136918  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:24.636812  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:25.136713  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:25.636380  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:26.137306  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:26.637061  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:27.136404  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:27.637289  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:28.137036  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:28.636845  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:29.136941  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:29.636389  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:30.136680  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:30.637070  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:31.137104  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:31.636133  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:32.137116  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:32.636853  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:33.136443  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:33.637916  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:34.136816  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:34.636273  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:35.136706  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:35.636531  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:36.137637  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:36.637362  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:37.137059  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:37.636929  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:38.136195  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:38.637085  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:39.137374  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:39.637573  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:40.136963  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:40.637000  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:41.137191  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:41.637408  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:42.138089  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:42.636887  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:43.136768  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:43.637307  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:44.136868  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:44.636280  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:45.137216  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:45.637469  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:46.137951  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:46.638113  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:47.137452  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:47.638245  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:48.137481  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:48.637512  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:49.138424  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:49.637833  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:50.136515  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:50.637554  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:51.137416  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:51.637141  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:52.136976  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:52.636826  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:53.136387  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:53.637693  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:54.136719  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:54.637073  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:55.136721  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:55.636844  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:56.136161  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:56.637019  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:57.137210  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:57.637438  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:58.136982  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:58.636673  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:59.136739  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:59.636844  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.136445  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.637084  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.137429  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.637053  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.137302  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.637122  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.136980  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.637104  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.136665  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.636249  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.137120  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.637143  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.136858  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.636687  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.137550  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.637626  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.136466  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.636747  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.136344  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.637454  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.137285  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.636374  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.137391  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.637433  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.137711  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.636855  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.136825  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.637754  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.136333  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.636435  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.137225  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.638254  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.136303  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.637149  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:17.136452  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:17.637889  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:18.137092  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:18.637148  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:19.136614  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:19.637267  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:20.136982  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:20.636667  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:21.137862  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:21.636757  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:22.137106  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:22.637014  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:23.137497  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:23.638022  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:24.136372  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:24.636966  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:25.136133  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:25.636703  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:26.137357  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:26.637010  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:27.136149  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:27.637046  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:28.136940  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:28.636297  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:29.136898  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:29.637948  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:30.136225  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:30.636410  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:31.136857  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:31.636768  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:32.137112  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:32.637013  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:33.136801  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:33.637173  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:34.136985  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:34.637410  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:35.136679  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:35.637618  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:36.136542  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:36.636814  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:37.137275  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:37.636826  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:38.137234  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:38.636757  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:39.137687  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:39.636793  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:40.136406  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:40.636794  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:41.137255  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:41.637730  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:42.137440  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:42.637373  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:43.137017  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:43.636001  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:44.137453  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:44.636420  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:45.136827  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:45.637157  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:46.136834  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:46.636215  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:47.136987  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:47.636812  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:48.137673  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:48.637054  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:49.137764  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:49.637702  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:50.137443  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:50.637289  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:51.136306  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:51.637782  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:52.137532  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:52.637547  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:53.137622  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:53.637007  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:54.136819  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:54.637155  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:55.136635  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:55.636965  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:56.136783  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:56.636658  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:57.137690  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:57.637200  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:58.137067  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:58.636946  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:59.136283  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:59.636323  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:00.137300  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:00.636405  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:01.136751  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:01.637661  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:02.137235  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:02.636716  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:03.137766  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:03.637298  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:04.137292  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:04.636635  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:05.137202  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:05.636930  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:06.136532  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:06.637151  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:07.136713  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:07.637307  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:08.137049  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:08.636569  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:09.136927  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:09.636099  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:10.137528  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:10.636826  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:11.136021  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:11.636945  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:12.136639  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:12.637156  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:13.137203  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:13.636831  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:14.137420  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:14.636800  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:15.137333  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:15.636779  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:16.136736  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:16.637704  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:17.137952  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:17.636997  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:18.136597  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:18.637167  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:19.137123  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:19.637021  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:20.136485  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:20.636973  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:21.137144  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:21.636913  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:22.136762  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:22.637427  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:23.136986  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:23.637032  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:24.137183  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:24.636455  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:25.137004  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:25.636380  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:26.137201  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:26.636875  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:27.137320  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:27.636651  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:28.137532  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:28.637210  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:29.136619  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:29.636965  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:30.136354  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:30.637248  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:31.136725  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:31.637694  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:32.137396  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:32.636869  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:33.137085  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:33.636846  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:34.136925  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:34.637345  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:35.136941  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:35.636442  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:36.137281  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:36.636711  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:37.137569  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:37.637035  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:38.136758  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:38.637036  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:39.136256  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:39.636846  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:40.136989  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:40.637225  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:41.136541  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:41.637789  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:42.137527  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:42.636993  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:43.136650  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:43.637606  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:44.137292  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:44.636780  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:45.136082  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:45.636481  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:46.137153  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:46.636938  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:47.136981  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:47.636554  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:48.137071  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:48.636436  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:49.137188  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:49.636895  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:50.136962  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:50.636858  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:51.136241  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:51.636827  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:52.136158  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:56.138151  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	bbf97a8974843       6e38f40d628db       23 minutes ago      Running             storage-provisioner                    2                   78e1c92e493dc       storage-provisioner                                     kube-system
	a617f907f5976       3a975970da2f5       23 minutes ago      Running             proxy                                  0                   42350c56c1ed2       kubernetes-dashboard-kong-9849c64bd-xp7zj               kubernetes-dashboard
	ba8583d499dab       3a975970da2f5       23 minutes ago      Exited              clear-stale-pid                        0                   42350c56c1ed2       kubernetes-dashboard-kong-9849c64bd-xp7zj               kubernetes-dashboard
	6d2c834ee3967       dd54374d0ab14       24 minutes ago      Running             kubernetes-dashboard-auth              0                   bbd45612817e1       kubernetes-dashboard-auth-557d9fbf7b-86ldt              kubernetes-dashboard
	b5a0ba5562bdd       d9cbc9f4053ca       24 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   a41cf6e9e0d81       kubernetes-dashboard-metrics-scraper-7685fd8b77-9nkzr   kubernetes-dashboard
	88391b53389ad       4921d7a6dffa9       24 minutes ago      Running             kindnet-cni                            1                   0e15be7bc9aeb       kindnet-kzlhv                                           kube-system
	d98f6ee8737a4       52546a367cc9e       24 minutes ago      Running             coredns                                1                   d90c48f4e5290       coredns-66bc5c9577-qmb9z                                kube-system
	1ce22fc8ed5c2       56cc512116c8f       24 minutes ago      Running             busybox                                1                   3dfc57a649643       busybox                                                 default
	c081d39cdf580       6e38f40d628db       24 minutes ago      Exited              storage-provisioner                    1                   78e1c92e493dc       storage-provisioner                                     kube-system
	23500931d7544       36eef8e07bdd6       24 minutes ago      Running             kube-proxy                             1                   559f33ba97424       kube-proxy-qhlhx                                        kube-system
	2a322cf835b3b       aec12dadf56dd       24 minutes ago      Running             kube-scheduler                         1                   ce16640408b00       kube-scheduler-embed-certs-536489                       kube-system
	5cc5c75096006       a3e246e9556e9       24 minutes ago      Running             etcd                                   1                   cd1766a84546d       etcd-embed-certs-536489                                 kube-system
	ce0c57d49301f       5826b25d990d7       24 minutes ago      Running             kube-controller-manager                1                   c167fe9f101c1       kube-controller-manager-embed-certs-536489              kube-system
	adaab08cca65a       aa27095f56193       24 minutes ago      Running             kube-apiserver                         1                   fd4903f32446c       kube-apiserver-embed-certs-536489                       kube-system
	5e05b1748ea17       56cc512116c8f       24 minutes ago      Exited              busybox                                0                   638d78702ad99       busybox                                                 default
	49d80e3460629       52546a367cc9e       24 minutes ago      Exited              coredns                                0                   799f22c98e05d       coredns-66bc5c9577-qmb9z                                kube-system
	6ef3449d1c944       4921d7a6dffa9       24 minutes ago      Exited              kindnet-cni                            0                   4857662120796       kindnet-kzlhv                                           kube-system
	c5a983f195e2d       36eef8e07bdd6       25 minutes ago      Exited              kube-proxy                             0                   4a28fb48631fb       kube-proxy-qhlhx                                        kube-system
	694f1a505c59b       aec12dadf56dd       25 minutes ago      Exited              kube-scheduler                         0                   8c22c22f26aa9       kube-scheduler-embed-certs-536489                       kube-system
	950bbd91cdf6c       a3e246e9556e9       25 minutes ago      Exited              etcd                                   0                   2a04dd26f6f99       etcd-embed-certs-536489                                 kube-system
	297af3c7b709c       5826b25d990d7       25 minutes ago      Exited              kube-controller-manager                0                   da3910955a305       kube-controller-manager-embed-certs-536489              kube-system
	a468570a2a402       aa27095f56193       25 minutes ago      Exited              kube-apiserver                         0                   46c947f55e62d       kube-apiserver-embed-certs-536489                       kube-system
	
	
	==> containerd <==
	Dec 19 03:28:40 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:40.273552591Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18df65878475717a15ee1c1c40abb4c3.slice/cri-containerd-5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.288692824Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e13be395cb122823ccdc93143fde460.slice/cri-containerd-ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.288837709Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e13be395cb122823ccdc93143fde460.slice/cri-containerd-ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.289653530Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7f26c2_aed8_4540_bd1f_0ee0b1974137.slice/cri-containerd-23500931d75449b8160f4e7f201ba70585ca8e2905c5d317c448baea0681d8a3.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.289753096Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7f26c2_aed8_4540_bd1f_0ee0b1974137.slice/cri-containerd-23500931d75449b8160f4e7f201ba70585ca8e2905c5d317c448baea0681d8a3.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.290431141Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c90b41_88a3_4279_84d8_13a52b7ef246.slice/cri-containerd-bbf97a8974843a36509fb3ed1c0f5f2bf65466a551be783aacb684ee93acde81.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.290528806Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c90b41_88a3_4279_84d8_13a52b7ef246.slice/cri-containerd-bbf97a8974843a36509fb3ed1c0f5f2bf65466a551be783aacb684ee93acde81.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.291364842Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c0a13ba859c4767a18964a450bf36c.slice/cri-containerd-2a322cf835b3b23b782536fe76f4ad8a12410fd3403278fc6b474579b71645a9.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.291476513Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c0a13ba859c4767a18964a450bf36c.slice/cri-containerd-2a322cf835b3b23b782536fe76f4ad8a12410fd3403278fc6b474579b71645a9.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.292312876Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18df65878475717a15ee1c1c40abb4c3.slice/cri-containerd-5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.292437968Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18df65878475717a15ee1c1c40abb4c3.slice/cri-containerd-5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.293185855Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79b035a8_ae4b_4a1f_8458_4fe0f7d4ebef.slice/cri-containerd-1ce22fc8ed5c244e3041371fb5e196f2c8c0390b5ef45df170040970b4b6679d.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.293289183Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79b035a8_ae4b_4a1f_8458_4fe0f7d4ebef.slice/cri-containerd-1ce22fc8ed5c244e3041371fb5e196f2c8c0390b5ef45df170040970b4b6679d.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.294295977Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0dceb8_d48d_4215_82f5_df001a8ffe5f.slice/cri-containerd-d98f6ee8737a4e1a9384d4dd7481c98610b0e878fad1eb0d13f725c032eb8a18.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.294429673Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0dceb8_d48d_4215_82f5_df001a8ffe5f.slice/cri-containerd-d98f6ee8737a4e1a9384d4dd7481c98610b0e878fad1eb0d13f725c032eb8a18.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.295338597Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75eb7b001775228e6f7e4cc959d9647.slice/cri-containerd-adaab08cca65aed7fec4b0bd60b2a396ef69b8752557464785ac247047a4b62d.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.295463784Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75eb7b001775228e6f7e4cc959d9647.slice/cri-containerd-adaab08cca65aed7fec4b0bd60b2a396ef69b8752557464785ac247047a4b62d.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.296362612Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2a4d0c65_8aff_4b2f_bb3d_d79b89f560ca.slice/cri-containerd-88391b53389ad6316dd5f150ef995160d26338b062497d4ce692afa61fc149e0.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.296459137Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod2a4d0c65_8aff_4b2f_bb3d_d79b89f560ca.slice/cri-containerd-88391b53389ad6316dd5f150ef995160d26338b062497d4ce692afa61fc149e0.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.297246250Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab8b054_e2df_4a75_92bf_a7df63248b7a.slice/cri-containerd-b5a0ba5562bdd756f5240f2406266343eb667e23534e1a7d78549343008ecfaf.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.297337631Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab8b054_e2df_4a75_92bf_a7df63248b7a.slice/cri-containerd-b5a0ba5562bdd756f5240f2406266343eb667e23534e1a7d78549343008ecfaf.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.298126828Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be7aed4_50f4_44e9_a4bb_985684a728ad.slice/cri-containerd-6d2c834ee3967fcf1a232292c437f189b9a9603e0cf2d152b90241728f99395b.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.298236183Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be7aed4_50f4_44e9_a4bb_985684a728ad.slice/cri-containerd-6d2c834ee3967fcf1a232292c437f189b9a9603e0cf2d152b90241728f99395b.scope/hugetlb.1GB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.299029796Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d84d601_10a6_4610_8b50_1794a35691db.slice/cri-containerd-a617f907f59764ef3a36004353cc0c58bffeab607edebad8cd81a81cf8b6ff18.scope/hugetlb.2MB.events\""
	Dec 19 03:28:50 embed-certs-536489 containerd[454]: time="2025-12-19T03:28:50.299134904Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d84d601_10a6_4610_8b50_1794a35691db.slice/cri-containerd-a617f907f59764ef3a36004353cc0c58bffeab607edebad8cd81a81cf8b6ff18.scope/hugetlb.1GB.events\""
	
	
	==> coredns [49d80e3460629230bed0177dda30379e478025d61a9337de8415aced6692f0c5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41424 - 61485 "HINFO IN 1045963530138923230.8580688753000702100. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052006531s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d98f6ee8737a4e1a9384d4dd7481c98610b0e878fad1eb0d13f725c032eb8a18] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39202 - 63367 "HINFO IN 6663590121657938747.4186564428347586509. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025882834s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-536489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-536489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-536489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_03_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-536489
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:28:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:42 +0000   Fri, 19 Dec 2025 03:03:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:42 +0000   Fri, 19 Dec 2025 03:03:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:42 +0000   Fri, 19 Dec 2025 03:03:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:42 +0000   Fri, 19 Dec 2025 03:04:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-536489
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                1ec4ce9e-9f47-460c-8f5f-4dbae0818e5d
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-66bc5c9577-qmb9z                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25m
	  kube-system                 etcd-embed-certs-536489                                  100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-kzlhv                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-embed-certs-536489                        250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-embed-certs-536489               200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-qhlhx                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-embed-certs-536489                        100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 metrics-server-746fcd58dc-8458x                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         24m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kubernetes-dashboard        kubernetes-dashboard-api-f5b56d7b9-zkjk8                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-auth-557d9fbf7b-86ldt               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-xp7zj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-9nkzr    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-x8z8r                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25m                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     25m                kubelet          Node embed-certs-536489 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  25m                kubelet          Node embed-certs-536489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet          Node embed-certs-536489 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25m                node-controller  Node embed-certs-536489 event: Registered Node embed-certs-536489 in Controller
	  Normal  NodeReady                24m                kubelet          Node embed-certs-536489 status is now: NodeReady
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node embed-certs-536489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node embed-certs-536489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node embed-certs-536489 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node embed-certs-536489 event: Registered Node embed-certs-536489 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [5cc5c7509600612720d7150e46d17f2dc74595a122185f7fb165025fc8591db9] <==
	{"level":"warn","ts":"2025-12-19T03:04:48.925908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.934366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:48.993787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.874145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.898623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.925283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.939330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:52.966946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.018405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.030527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.047519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.060496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:53.089715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35188","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:05:07.803872Z","caller":"traceutil/trace.go:172","msg":"trace[1867073184] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"138.584763ms","start":"2025-12-19T03:05:07.665266Z","end":"2025-12-19T03:05:07.803850Z","steps":["trace[1867073184] 'process raft request'  (duration: 138.402929ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:07.995500Z","caller":"traceutil/trace.go:172","msg":"trace[27340533] transaction","detail":"{read_only:false; response_revision:752; number_of_response:1; }","duration":"185.133001ms","start":"2025-12-19T03:05:07.810344Z","end":"2025-12-19T03:05:07.995477Z","steps":["trace[27340533] 'process raft request'  (duration: 104.597246ms)","trace[27340533] 'compare'  (duration: 80.3333ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:07.995687Z","caller":"traceutil/trace.go:172","msg":"trace[843511351] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"185.056756ms","start":"2025-12-19T03:05:07.810614Z","end":"2025-12-19T03:05:07.995671Z","steps":["trace[843511351] 'process raft request'  (duration: 184.803425ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:14:48.371167Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1148}
	{"level":"info","ts":"2025-12-19T03:14:48.391919Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1148,"took":"20.269493ms","hash":742480838,"current-db-size-bytes":4325376,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-12-19T03:14:48.392051Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":742480838,"revision":1148,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:48.375346Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1408}
	{"level":"info","ts":"2025-12-19T03:19:48.378040Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1408,"took":"2.28429ms","hash":877099031,"current-db-size-bytes":4325376,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T03:19:48.378084Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":877099031,"revision":1408,"compact-revision":1148}
	{"level":"info","ts":"2025-12-19T03:24:48.380883Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1662}
	{"level":"info","ts":"2025-12-19T03:24:48.384231Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1662,"took":"2.929472ms","hash":2804805001,"current-db-size-bytes":4325376,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-19T03:24:48.384280Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2804805001,"revision":1662,"compact-revision":1408}
	
	
	==> etcd [950bbd91cdf6c6f6ba6481d495ec450f0338291700356cd321b1785e71f85ce9] <==
	{"level":"warn","ts":"2025-12-19T03:03:49.550065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.567028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.574515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.584857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.593473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.600846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.609207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.617661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.625168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.633090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.642740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.650275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.657708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.664694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.672975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.680435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.695654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.702627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.714083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.723066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.731127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.751813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.759831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.768680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:03:49.833814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58604","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:29:00 up  2:11,  0 user,  load average: 0.88, 0.76, 2.47
	Linux embed-certs-536489 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ef3449d1c94453317ecbd3daf843f6ee0146bff95f5ea5fb15561ddd656b76e] <==
	I1219 03:04:03.286512       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:03.286802       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1219 03:04:03.286961       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:03.286988       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:03.287013       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:03.584219       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:03.584267       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:03.584281       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:03.584476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:04.084669       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:04.084709       1 metrics.go:72] Registering metrics
	I1219 03:04:04.084861       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:13.585471       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:04:13.585547       1 main.go:301] handling current node
	I1219 03:04:23.584909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:04:23.584968       1 main.go:301] handling current node
	
	
	==> kindnet [88391b53389ad6316dd5f150ef995160d26338b062497d4ce692afa61fc149e0] <==
	I1219 03:26:52.264625       1 main.go:301] handling current node
	I1219 03:27:02.263566       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:27:02.263641       1 main.go:301] handling current node
	I1219 03:27:12.263563       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:27:12.263671       1 main.go:301] handling current node
	I1219 03:27:22.263964       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:27:22.264000       1 main.go:301] handling current node
	I1219 03:27:32.264235       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:27:32.264284       1 main.go:301] handling current node
	I1219 03:27:42.264242       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:27:42.264279       1 main.go:301] handling current node
	I1219 03:27:52.264210       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:27:52.264256       1 main.go:301] handling current node
	I1219 03:28:02.263875       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:28:02.263924       1 main.go:301] handling current node
	I1219 03:28:12.264226       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:28:12.264257       1 main.go:301] handling current node
	I1219 03:28:22.264190       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:28:22.264229       1 main.go:301] handling current node
	I1219 03:28:32.264050       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:28:32.264083       1 main.go:301] handling current node
	I1219 03:28:42.264547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:28:42.264615       1 main.go:301] handling current node
	I1219 03:28:52.263613       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:28:52.263650       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a468570a2a402d6b3627a5f34baf561fb32f32353eae4b100a0e832cfda659f4] <==
	I1219 03:03:53.080830       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:03:53.119119       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:03:57.551348       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:03:57.556886       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:03:58.149856       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:03:58.349463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1219 03:04:26.563944       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:36676: use of closed network connection
	I1219 03:04:27.245043       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:27.248999       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:27.249065       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:04:27.249119       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:27.321637       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.104.75.28"}
	W1219 03:04:27.328760       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:27.328816       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:04:27.334393       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:27.334442       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [adaab08cca65aed7fec4b0bd60b2a396ef69b8752557464785ac247047a4b62d] <==
	E1219 03:24:50.599904       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1219 03:24:50.599917       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:24:50.599928       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:24:50.601057       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:25:50.600310       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:25:50.600383       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:25:50.600400       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:25:50.601454       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:25:50.601526       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:25:50.601544       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:27:50.601330       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:27:50.601395       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:27:50.601412       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:27:50.602468       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:27:50.602559       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:27:50.602615       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [297af3c7b709c7627a4e3547ff621bc84bc949e990ed9529d2de353cd97067ba] <==
	I1219 03:03:57.379340       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:03:57.394447       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:03:57.394515       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:03:57.394572       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:03:57.394616       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:03:57.394624       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:03:57.394934       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:03:57.395929       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 03:03:57.395994       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:03:57.396026       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:03:57.396084       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:03:57.396169       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:03:57.396328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:03:57.396811       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:03:57.398567       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 03:03:57.399867       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 03:03:57.401002       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:03:57.402356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:03:57.405538       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:03:57.407743       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:03:57.422148       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:03:57.441008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:04:17.352290       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1219 03:04:27.407214       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:04:27.448854       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [ce0c57d49301fbc43c6109ed0626ba83c88d5ea4449d0311222becbdd2ca7f9d] <==
	I1219 03:22:54.482865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:23:24.345932       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:23:24.489853       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:23:54.351287       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:23:54.497687       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:24:24.356303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:24:24.505049       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:24:54.362143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:24:54.512754       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:25:24.367646       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:25:24.520294       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:25:54.372757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:25:54.527685       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:26:24.377414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:26:24.535750       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:26:54.382856       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:26:54.544229       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:27:24.387978       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:27:24.552619       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:27:54.393772       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:27:54.561322       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:28:24.398770       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:28:24.569382       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:28:54.402905       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:28:54.577711       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [23500931d75449b8160f4e7f201ba70585ca8e2905c5d317c448baea0681d8a3] <==
	I1219 03:04:51.445944       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:51.552010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:04:51.653481       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:04:51.653875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:04:51.653988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:51.734710       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:51.734784       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:04:51.744382       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:51.745692       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:04:51.745799       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:51.748477       1 config.go:200] "Starting service config controller"
	I1219 03:04:51.748568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:51.748780       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:51.748857       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:51.749372       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:51.750062       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:51.750837       1 config.go:309] "Starting node config controller"
	I1219 03:04:51.750861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:51.750869       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:51.849778       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:04:51.850063       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:51.850193       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c5a983f195e2d026daabd22368b51353c4fcb3399dbe02f13f628d6de0e5afbc] <==
	I1219 03:03:59.612758       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:03:59.697569       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:03:59.798640       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:03:59.798691       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:03:59.798877       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:03:59.832040       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:03:59.832120       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:03:59.840790       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:03:59.841566       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:03:59.842147       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:03:59.845715       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:03:59.846440       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:03:59.846758       1 config.go:309] "Starting node config controller"
	I1219 03:03:59.849072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:03:59.849083       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:03:59.847258       1 config.go:200] "Starting service config controller"
	I1219 03:03:59.849092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:03:59.847253       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:03:59.849106       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:03:59.949210       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:03:59.949259       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:03:59.949270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2a322cf835b3b23b782536fe76f4ad8a12410fd3403278fc6b474579b71645a9] <==
	I1219 03:04:47.795634       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:04:49.493848       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:04:49.495162       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:49.495189       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:04:49.495211       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:04:49.578289       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:04:49.578331       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:49.600751       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:49.601169       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:49.602617       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:04:49.602945       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:04:49.702362       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [694f1a505c59b9aa3ec96e13e5fad88123c4ea35ca018ff6aa1259950243c9a4] <==
	E1219 03:03:50.412171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:03:50.412533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:03:50.412546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:03:50.412314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:03:50.412426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:03:50.412415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:03:50.412500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:03:50.412246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:03:50.412361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:03:50.412696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:03:50.412781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:03:50.412745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:03:51.239323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:03:51.282101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:03:51.285301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:03:51.295328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:03:51.447520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:03:51.508832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:03:51.605064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:03:51.614764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:03:51.659622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:03:51.672097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:03:51.691733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:03:51.698794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1219 03:03:53.709139       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:27:06 embed-certs-536489 kubelet[592]: E1219 03:27:06.368453     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:27:15 embed-certs-536489 kubelet[592]: E1219 03:27:15.368159     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:27:15 embed-certs-536489 kubelet[592]: E1219 03:27:15.368220     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:27:17 embed-certs-536489 kubelet[592]: E1219 03:27:17.368513     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:27:29 embed-certs-536489 kubelet[592]: E1219 03:27:29.367985     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:27:30 embed-certs-536489 kubelet[592]: E1219 03:27:30.371401     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:27:31 embed-certs-536489 kubelet[592]: E1219 03:27:31.368361     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:27:44 embed-certs-536489 kubelet[592]: E1219 03:27:44.367917     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:27:44 embed-certs-536489 kubelet[592]: E1219 03:27:44.367923     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:27:45 embed-certs-536489 kubelet[592]: E1219 03:27:45.368333     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:27:58 embed-certs-536489 kubelet[592]: E1219 03:27:58.367681     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:27:58 embed-certs-536489 kubelet[592]: E1219 03:27:58.367692     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:27:59 embed-certs-536489 kubelet[592]: E1219 03:27:59.367924     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:28:11 embed-certs-536489 kubelet[592]: E1219 03:28:11.368540     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:28:13 embed-certs-536489 kubelet[592]: E1219 03:28:13.368755     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:28:13 embed-certs-536489 kubelet[592]: E1219 03:28:13.368817     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:28:23 embed-certs-536489 kubelet[592]: E1219 03:28:23.368451     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:28:24 embed-certs-536489 kubelet[592]: E1219 03:28:24.368263     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:28:25 embed-certs-536489 kubelet[592]: E1219 03:28:25.368402     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:28:34 embed-certs-536489 kubelet[592]: E1219 03:28:34.368575     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:28:36 embed-certs-536489 kubelet[592]: E1219 03:28:36.369085     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:28:39 embed-certs-536489 kubelet[592]: E1219 03:28:39.368673     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	Dec 19 03:28:48 embed-certs-536489 kubelet[592]: E1219 03:28:48.369032     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-8458x" podUID="47114157-df98-40be-815f-7437499ca215"
	Dec 19 03:28:51 embed-certs-536489 kubelet[592]: E1219 03:28:51.367965     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x8z8r" podUID="d61e9896-5a56-490e-b97a-634b1c427ce2"
	Dec 19 03:28:52 embed-certs-536489 kubelet[592]: E1219 03:28:52.367884     592 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-f5b56d7b9-zkjk8" podUID="be638e12-bf07-4abc-af3d-e23927785cea"
	
	
	==> kubernetes-dashboard [6d2c834ee3967fcf1a232292c437f189b9a9603e0cf2d152b90241728f99395b] <==
	I1219 03:04:59.714371       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:04:59.714452       1 init.go:49] Using in-cluster config
	I1219 03:04:59.714625       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [b5a0ba5562bdd756f5240f2406266343eb667e23534e1a7d78549343008ecfaf] <==
	E1219 03:25:56.934011       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:26:56.934213       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:27:56.933982       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:28:56.934132       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:25:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:25:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:25:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:26:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:26:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:26:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:26:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:26:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:26:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:27:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:27:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:27:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:27:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:27:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:27:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:28:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:28:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:28:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:28:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:28:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:28:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	
	
	==> storage-provisioner [bbf97a8974843a36509fb3ed1c0f5f2bf65466a551be783aacb684ee93acde81] <==
	W1219 03:28:35.454828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:37.458524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:37.462689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:39.465678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:39.470366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:41.473926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:41.478355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:43.482391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:43.487505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:45.491321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:45.496914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:47.500158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:47.507023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:49.510270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:49.514443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:51.518224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:51.523523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:53.527125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:53.531643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:55.535016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:55.539037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:57.542752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:57.548407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:59.552147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:59.557107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c081d39cdf580a1c019cbb24b0e727aa96d7d5720457e45142ee1b70e3fa2ea9] <==
	I1219 03:04:51.367962       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:21.372851       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-536489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-536489 describe pod metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-536489 describe pod metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r: exit status 1 (66.805628ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-8458x" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-f5b56d7b9-zkjk8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-5c9f966b98-x8z8r" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-536489 describe pod metrics-server-746fcd58dc-8458x kubernetes-dashboard-api-f5b56d7b9-zkjk8 kubernetes-dashboard-web-5c9f966b98-x8z8r: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:29:03.364812846 +0000 UTC m=+3820.357937048
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-208281 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-208281 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (66.668209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-208281 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-208281
helpers_test.go:244: (dbg) docker inspect no-preload-208281:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48",
	        "Created": "2025-12-19T03:03:28.458126614Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 570153,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:42.842760214Z",
	            "FinishedAt": "2025-12-19T03:04:41.846888118Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/hostname",
	        "HostsPath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/hosts",
	        "LogPath": "/var/lib/docker/containers/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48/d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48-json.log",
	        "Name": "/no-preload-208281",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-208281:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-208281",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d56842232a2cb560a4581ebccdc2dd6214606b633f041f214d2117bbe9b28a48",
	                "LowerDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/621165c6a6dbe21224b134a5d17637f400f3317c9f7f38695edec03f7bfd9a68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-208281",
	                "Source": "/var/lib/docker/volumes/no-preload-208281/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-208281",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-208281",
	                "name.minikube.sigs.k8s.io": "no-preload-208281",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ffc7d31ca1ab2beeb959bdbebc852eb31e400242e6bcd8a5fef873fcf70249b3",
	            "SandboxKey": "/var/run/docker/netns/ffc7d31ca1ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-208281": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "37b3bc9d9fb4e4e4504c1f37f0b72e1a5a4d569ae13e2c5ab75bc3fa3aa89d9c",
	                    "EndpointID": "4745c5f264d9d95359957a2376ee9bf289d5f6ec578a858c6c46d3f5f33ee484",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:ee:da:be:6a:18",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-208281",
	                        "d56842232a2c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208281 -n no-preload-208281
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-208281 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-208281 logs -n 25: (1.730431948s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-208281 --alsologtostderr -v=3                                                                                                                                                                                                              │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0      │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p default-k8s-diff-port-103644 --alsologtostderr -v=3                                                                                                                                                                                                   │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                            │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	│ image   │ old-k8s-version-002036 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-002036 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ unpause │ -p old-k8s-version-002036 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ delete  │ -p old-k8s-version-002036                                                                                                                                                                                                                                │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ delete  │ -p old-k8s-version-002036                                                                                                                                                                                                                                │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ start   │ -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-017890 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ stop    │ -p newest-cni-017890 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-017890 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ image   │ embed-certs-536489 image list --format=json                                                                                                                                                                                                              │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ pause   │ -p embed-certs-536489 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ unpause │ -p embed-certs-536489 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.551556  597843 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.551710  597843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.551722  597843 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.551907  597843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.552523  597843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:24:01.553444  597843 out.go:368] Setting JSON to false
	I1219 03:24:01.554824  597843 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7581,"bootTime":1766107061,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.554941  597843 start.go:143] virtualization: kvm guest
	I1219 03:24:01.556480  597843 out.go:179] * [newest-cni-017890] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.557959  597843 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.558051  597843 notify.go:221] Checking for updates...
	I1219 03:24:01.560013  597843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.561074  597843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:24:01.562040  597843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:24:01.563073  597843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.564045  597843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.565416  597843 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.565980  597843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.591006  597843 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.591163  597843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.654185  597843 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.642935504 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.654289  597843 docker.go:319] overlay module found
	I1219 03:24:01.655871  597843 out.go:179] * Using the docker driver based on existing profile
	I1219 03:24:01.657020  597843 start.go:309] selected driver: docker
	I1219 03:24:01.657040  597843 start.go:928] validating driver "docker" against &{Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.657177  597843 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.657853  597843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.714604  597843 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.704751605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.714967  597843 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.715007  597843 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.715105  597843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:24:01.715196  597843 start.go:353] cluster config:
	{Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.716916  597843 out.go:179] * Starting "newest-cni-017890" primary control-plane node in "newest-cni-017890" cluster
	I1219 03:24:01.717773  597843 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:24:01.718635  597843 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.719569  597843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:24:01.719625  597843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.719643  597843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 03:24:01.719660  597843 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.719745  597843 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.719760  597843 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1219 03:24:01.719887  597843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/config.json ...
	I1219 03:24:01.740614  597843 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.740636  597843 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.740658  597843 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.740701  597843 start.go:360] acquireMachinesLock for newest-cni-017890: {Name:mk26fbc65f425d2942ec43638d9c096d91448606 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.740774  597843 start.go:364] duration metric: took 45.332µs to acquireMachinesLock for "newest-cni-017890"
	I1219 03:24:01.740800  597843 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:24:01.740810  597843 fix.go:54] fixHost starting: 
	I1219 03:24:01.741087  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:01.759521  597843 fix.go:112] recreateIfNeeded on newest-cni-017890: state=Stopped err=<nil>
	W1219 03:24:01.759565  597843 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:24:01.761307  597843 out.go:252] * Restarting existing docker container for "newest-cni-017890" ...
	I1219 03:24:01.761400  597843 cli_runner.go:164] Run: docker start newest-cni-017890
	I1219 03:24:02.017076  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:02.038183  597843 kic.go:430] container "newest-cni-017890" state is running.
	I1219 03:24:02.038784  597843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-017890
	I1219 03:24:02.057901  597843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/config.json ...
	I1219 03:24:02.058153  597843 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:02.058257  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:02.077426  597843 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:02.077827  597843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1219 03:24:02.077849  597843 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:02.078525  597843 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58000->127.0.0.1:33108: read: connection reset by peer
	I1219 03:24:05.226915  597843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-017890
	
	I1219 03:24:05.226951  597843 ubuntu.go:182] provisioning hostname "newest-cni-017890"
	I1219 03:24:05.227032  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.245436  597843 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:05.245691  597843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1219 03:24:05.245713  597843 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-017890 && echo "newest-cni-017890" | sudo tee /etc/hostname
	I1219 03:24:05.401463  597843 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-017890
	
	I1219 03:24:05.401547  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.420728  597843 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:05.420981  597843 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1219 03:24:05.420998  597843 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-017890' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-017890/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-017890' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:05.565256  597843 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:05.565300  597843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-253859/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-253859/.minikube}
	I1219 03:24:05.565337  597843 ubuntu.go:190] setting up certificates
	I1219 03:24:05.565349  597843 provision.go:84] configureAuth start
	I1219 03:24:05.565402  597843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-017890
	I1219 03:24:05.584718  597843 provision.go:143] copyHostCerts
	I1219 03:24:05.584776  597843 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem, removing ...
	I1219 03:24:05.584792  597843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem
	I1219 03:24:05.584865  597843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/ca.pem (1078 bytes)
	I1219 03:24:05.585061  597843 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem, removing ...
	I1219 03:24:05.585075  597843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem
	I1219 03:24:05.585118  597843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/cert.pem (1123 bytes)
	I1219 03:24:05.585196  597843 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem, removing ...
	I1219 03:24:05.585204  597843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem
	I1219 03:24:05.585229  597843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-253859/.minikube/key.pem (1675 bytes)
	I1219 03:24:05.585291  597843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem org=jenkins.newest-cni-017890 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-017890]
	I1219 03:24:05.728004  597843 provision.go:177] copyRemoteCerts
	I1219 03:24:05.728065  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:05.728101  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.746601  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:05.850371  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:05.868683  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:24:05.887271  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:05.907779  597843 provision.go:87] duration metric: took 342.411585ms to configureAuth
	I1219 03:24:05.907808  597843 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:05.908005  597843 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:05.908020  597843 machine.go:97] duration metric: took 3.849847847s to provisionDockerMachine
	I1219 03:24:05.908029  597843 start.go:293] postStartSetup for "newest-cni-017890" (driver="docker")
	I1219 03:24:05.908040  597843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:05.908082  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:05.908126  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:05.926181  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.031728  597843 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:06.035533  597843 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:06.035563  597843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:06.035576  597843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/addons for local assets ...
	I1219 03:24:06.035658  597843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-253859/.minikube/files for local assets ...
	I1219 03:24:06.035762  597843 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem -> 2574932.pem in /etc/ssl/certs
	I1219 03:24:06.035894  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:06.044118  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:24:06.063122  597843 start.go:296] duration metric: took 155.077572ms for postStartSetup
	I1219 03:24:06.063208  597843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:06.063254  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:06.081904  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.182234  597843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:06.187009  597843 fix.go:56] duration metric: took 4.446188945s for fixHost
	I1219 03:24:06.187038  597843 start.go:83] releasing machines lock for "newest-cni-017890", held for 4.446249432s
	I1219 03:24:06.187140  597843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-017890
	I1219 03:24:06.205248  597843 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:06.205318  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:06.205338  597843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:06.205413  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:06.224981  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.225198  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:06.387880  597843 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:06.395185  597843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:06.399905  597843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:06.399981  597843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:06.408633  597843 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:24:06.408657  597843 start.go:496] detecting cgroup driver to use...
	I1219 03:24:06.408690  597843 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:06.408738  597843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:24:06.425646  597843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:24:06.438969  597843 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:06.439029  597843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:06.454372  597843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:06.467895  597843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:06.550002  597843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:06.635564  597843 docker.go:234] disabling docker service ...
	I1219 03:24:06.635686  597843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:06.651872  597843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:06.665206  597843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:06.751936  597843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:06.835058  597843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:06.847793  597843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:06.862660  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:24:06.872135  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:24:06.881406  597843 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1219 03:24:06.881467  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1219 03:24:06.891421  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:24:06.900927  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:24:06.910491  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:24:06.919635  597843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:06.927769  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:24:06.936983  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:24:06.946749  597843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:24:06.956517  597843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:06.964790  597843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:06.973869  597843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:07.059555  597843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:24:07.167126  597843 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:24:07.167191  597843 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:24:07.171613  597843 start.go:564] Will wait 60s for crictl version
	I1219 03:24:07.171670  597843 ssh_runner.go:195] Run: which crictl
	I1219 03:24:07.175300  597843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:07.201572  597843 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:24:07.201669  597843 ssh_runner.go:195] Run: containerd --version
	I1219 03:24:07.224745  597843 ssh_runner.go:195] Run: containerd --version
	I1219 03:24:07.248441  597843 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1219 03:24:07.249524  597843 cli_runner.go:164] Run: docker network inspect newest-cni-017890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:07.267447  597843 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:07.271636  597843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
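The `/etc/hosts` update on the line above is a filter-and-append pattern: drop any stale `host.minikube.internal` entry, append the fresh mapping, write to a temp file, then copy the result back over `/etc/hosts`. A sketch against a temp file (no sudo; the IPs are placeholders):

```shell
# Same { grep -v ...; echo ...; } > tmp pattern as the log, on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Remove any existing mapping, then append the current one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.103.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
rm -f "$hosts"
```

Writing to a temp file first keeps the edit atomic; the real command finishes with `sudo cp` because `/etc/hosts` is root-owned.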
	I1219 03:24:07.283419  597843 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:07.284366  597843 kubeadm.go:884] updating cluster {Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:07.284534  597843 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:24:07.284616  597843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:07.311719  597843 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:24:07.311745  597843 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:24:07.311808  597843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:07.338880  597843 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:24:07.338904  597843 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:07.338911  597843 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:24:07.339018  597843 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-017890 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:07.339070  597843 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:24:07.366831  597843 cni.go:84] Creating CNI manager for ""
	I1219 03:24:07.366854  597843 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:24:07.366877  597843 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:07.366914  597843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-017890 NodeName:newest-cni-017890 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:07.367047  597843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-017890"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:07.367129  597843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:07.375406  597843 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:07.375480  597843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:07.383556  597843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1219 03:24:07.397253  597843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:07.410328  597843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
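The `kubeadm.yaml.new` written above carries the four-document YAML stream printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by `---`). A quick shell sanity check of that structure, with the manifest abbreviated to its `kind:` lines:

```shell
# Abbreviated stand-in for the generated kubeadm.yaml: four documents,
# three "---" separators.
manifest=$(cat <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
)
printf '%s\n' "$manifest" | grep -c '^---$'        # separator count: 3 => 4 docs
printf '%s\n' "$manifest" | sed -n 's/^kind: //p'  # list the document kinds
```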
	I1219 03:24:07.422986  597843 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:07.426716  597843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:07.436890  597843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:07.521758  597843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:07.548164  597843 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890 for IP: 192.168.103.2
	I1219 03:24:07.548185  597843 certs.go:195] generating shared ca certs ...
	I1219 03:24:07.548199  597843 certs.go:227] acquiring lock for ca certs: {Name:mk50e31410087b5c6cdb0986368a8c9100618403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:07.548383  597843 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key
	I1219 03:24:07.548436  597843 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key
	I1219 03:24:07.548447  597843 certs.go:257] generating profile certs ...
	I1219 03:24:07.548530  597843 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/client.key
	I1219 03:24:07.548623  597843 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/apiserver.key.6f00a9c9
	I1219 03:24:07.548670  597843 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/proxy-client.key
	I1219 03:24:07.548771  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem (1338 bytes)
	W1219 03:24:07.548802  597843 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:07.548812  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca-key.pem (1679 bytes)
	I1219 03:24:07.548850  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:07.548874  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:07.548909  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:07.548958  597843 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem (1708 bytes)
	I1219 03:24:07.549552  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:07.570143  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:07.590219  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:07.610623  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:24:07.634629  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:07.657557  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:07.676631  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:07.694598  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/newest-cni-017890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:24:07.712959  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:07.731244  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/certs/257493.pem --> /usr/share/ca-certificates/257493.pem (1338 bytes)
	I1219 03:24:07.750319  597843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/ssl/certs/2574932.pem --> /usr/share/ca-certificates/2574932.pem (1708 bytes)
	I1219 03:24:07.771334  597843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:07.785765  597843 ssh_runner.go:195] Run: openssl version
	I1219 03:24:07.793440  597843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.801622  597843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/257493.pem /etc/ssl/certs/257493.pem
	I1219 03:24:07.809393  597843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.813382  597843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:34 /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.813449  597843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257493.pem
	I1219 03:24:07.850229  597843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:07.858513  597843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.865970  597843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2574932.pem /etc/ssl/certs/2574932.pem
	I1219 03:24:07.873380  597843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.877201  597843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:34 /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.877255  597843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2574932.pem
	I1219 03:24:07.913110  597843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:07.921263  597843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.930204  597843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:07.938297  597843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.942462  597843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.942530  597843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:07.979726  597843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
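The cert checks above rely on OpenSSL's subject-hash lookup convention: each trusted CA under `/etc/ssl/certs` is reachable through a `<hash>.0` symlink, where the hash comes from `openssl x509 -hash -noout`. The convention can be reproduced in a temp directory with an ad-hoc self-signed cert (subject name is a placeholder):

```shell
# Build a <subject-hash>.0 symlink the way the log's ln -fs / test -L pair
# expects, but in a scratch directory.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
rm -rf "$dir"
```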
	I1219 03:24:07.987766  597843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:07.991691  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:24:08.028949  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:24:08.064356  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:24:08.116711  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:24:08.176379  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:24:08.232783  597843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:24:08.284707  597843 kubeadm.go:401] StartCluster: {Name:newest-cni-017890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-017890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:08.284882  597843 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:08.284957  597843 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:08.333947  597843 cri.go:92] found id: "83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e"
	I1219 03:24:08.333977  597843 cri.go:92] found id: "979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c"
	I1219 03:24:08.333982  597843 cri.go:92] found id: "53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd"
	I1219 03:24:08.333986  597843 cri.go:92] found id: "8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f"
	I1219 03:24:08.333990  597843 cri.go:92] found id: "acb96d7d74ab88f884b6bcdeb54e82450832b1f30782d5bfa67316f6ec297f53"
	I1219 03:24:08.333994  597843 cri.go:92] found id: "73fb53cbfdbfe23af6b14e5ace7d3ef32097074620b3f59326a8bb6ef351fc41"
	I1219 03:24:08.333998  597843 cri.go:92] found id: "aa62d373b4db03ea13900f92ce5fa0b1c75228547e72f0b8988617a9423b8531"
	I1219 03:24:08.334011  597843 cri.go:92] found id: "034ac5a3e39903cc61975f0b912f650b8223647d9830ebc9479f6cc938e8a264"
	I1219 03:24:08.334015  597843 cri.go:92] found id: "a3c4dff33f0219049afb9aaf4cf5be196e7dcb3d952e9ea0062808a832ca66bd"
	I1219 03:24:08.334025  597843 cri.go:92] found id: ""
	I1219 03:24:08.334078  597843 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1219 03:24:08.366112  597843 cri.go:119] JSON = [{"ociVersion":"1.2.1","id":"36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","pid":832,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f/rootfs","created":"2025-12-19T03:24:08.141421132Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-017890_ed97dfd8df93f9e7cb623b478ae52abb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed97dfd8df93f9e7cb623b478ae52abb"},"owner":"root"},{"ociVersion":"1.2.1","id":"53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd","pid":946,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd/rootfs","created":"2025-12-19T03:24:08.277254842Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9dc1978d92ca7a4bd42b11590dd8157f"},"owner":"root"},{"ociVersion":"1.2.1","id":"7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","pid":819,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d/rootfs","created":"2025-12-19T03:24:08.136706319Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-017890_9dc1978d92ca7a4bd42b11590dd8157f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9dc1978d92ca7a4bd42b11590dd8157f"},"owner":"root"},{"ociVersion":"1.2.1","id":"83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e","pid":985,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e/rootfs","created":"2025-12-19T03:24:08.296298017Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.6-0","io.kubernetes.cri.sandbox-id":"b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9bba4882296cd6b19c27ea1449a86cb4"},"owner":"root"},{"ociVersion":"1.2.1","id":"8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f","pid":939,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f/rootfs","created":"2025-12-19T03:24:08.275699538Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ed97dfd8df93f9e7cb623b478ae52abb"},"owner":"root"},{"ociVersion":"1.2.1","id":"979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c","pid":977,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c/rootfs","created":"2025-12-19T03:24:08.280803642Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.35.0-rc.1","io.kubernetes.cri.sandbox-id":"d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7a25d2998d02ae38bf3d0b066059a6f"},"owner":"root"},{"ociVersion":"1.2.1","id":"b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","pid":877,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46/rootfs","created":"2025-12-19T03:24:08.165286073Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-017890_9bba4882296cd6b19c27ea1449a86cb4","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9bba4882296cd6b19c27ea1449a86cb4"},"owner":"root"},{"ociVersion":"1.2.1","id":"d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","pid":870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa/rootfs","created":"2025-12-19T03:24:08.164795348Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-017890_d7a25d2998d02ae38bf3d0b066059a6f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-017890","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d7a25d2998d02ae38bf3d0b066059a6f"},"owner":"root"}]
	I1219 03:24:08.366321  597843 cri.go:129] list returned 8 containers
	I1219 03:24:08.366344  597843 cri.go:132] container: {ID:36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f Status:running}
	I1219 03:24:08.366363  597843 cri.go:134] skipping 36c65629ff3db0c00597a0df765319f1a22752b28262d4bfc1d334b926fa257f - not in ps
	I1219 03:24:08.366370  597843 cri.go:132] container: {ID:53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd Status:running}
	I1219 03:24:08.366379  597843 cri.go:138] skipping {53656077c17e3d1cee84f578c4df1231a9fd6794af10228debb28e2afd0744dd running}: state = "running", want "paused"
	I1219 03:24:08.366388  597843 cri.go:132] container: {ID:7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d Status:running}
	I1219 03:24:08.366394  597843 cri.go:134] skipping 7ae918824c51dd446480e7e24f8a6d7095c8b146c5aa20de5db0c2312a74668d - not in ps
	I1219 03:24:08.366399  597843 cri.go:132] container: {ID:83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e Status:running}
	I1219 03:24:08.366407  597843 cri.go:138] skipping {83c773e18493117a84ba6126e570660dda5e21ff831cb599f1a6b9387970232e running}: state = "running", want "paused"
	I1219 03:24:08.366413  597843 cri.go:132] container: {ID:8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f Status:running}
	I1219 03:24:08.366419  597843 cri.go:138] skipping {8f86dd795e5932f8f4677c0dedd84373f623a55027285f0293f430f630177b5f running}: state = "running", want "paused"
	I1219 03:24:08.366426  597843 cri.go:132] container: {ID:979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c Status:running}
	I1219 03:24:08.366433  597843 cri.go:138] skipping {979a7c551bb4e02d089f4c1e13102bf59ffd1b8be4a106c33d47bbdea6aab91c running}: state = "running", want "paused"
	I1219 03:24:08.366439  597843 cri.go:132] container: {ID:b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46 Status:running}
	I1219 03:24:08.366446  597843 cri.go:134] skipping b7b01a51aff40ded1de11025c04b4baf9849da2ab04bda586c00d46fc62f4e46 - not in ps
	I1219 03:24:08.366456  597843 cri.go:132] container: {ID:d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa Status:running}
	I1219 03:24:08.366463  597843 cri.go:134] skipping d8f4f219d9857ab41382ecd4e2b1eea546ef326c4c32b3cdaa2da5afeadc71fa - not in ps
	I1219 03:24:08.366521  597843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:08.380825  597843 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:24:08.380845  597843 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:24:08.380896  597843 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:24:08.391924  597843 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:24:08.393405  597843 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-017890" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:24:08.394392  597843 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-253859/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-017890" cluster setting kubeconfig missing "newest-cni-017890" context setting]
	I1219 03:24:08.395869  597843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:08.398165  597843 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:24:08.408442  597843 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1219 03:24:08.408473  597843 kubeadm.go:602] duration metric: took 27.622322ms to restartPrimaryControlPlane
	I1219 03:24:08.408480  597843 kubeadm.go:403] duration metric: took 123.791788ms to StartCluster
	I1219 03:24:08.408494  597843 settings.go:142] acquiring lock: {Name:mkabb1ebf75b28a37c7b2b053110889b555ff453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:08.408540  597843 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:24:08.410289  597843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/kubeconfig: {Name:mk882428f840659847d4e22cffee2d7775067610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:08.410549  597843 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:24:08.410662  597843 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:08.410772  597843 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-017890"
	I1219 03:24:08.410804  597843 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-017890"
	I1219 03:24:08.410797  597843 addons.go:70] Setting default-storageclass=true in profile "newest-cni-017890"
	I1219 03:24:08.410808  597843 addons.go:70] Setting dashboard=true in profile "newest-cni-017890"
	I1219 03:24:08.410827  597843 addons.go:70] Setting metrics-server=true in profile "newest-cni-017890"
	I1219 03:24:08.410838  597843 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-017890"
	I1219 03:24:08.410841  597843 addons.go:239] Setting addon metrics-server=true in "newest-cni-017890"
	I1219 03:24:08.410850  597843 addons.go:239] Setting addon dashboard=true in "newest-cni-017890"
	W1219 03:24:08.410851  597843 addons.go:248] addon metrics-server should already be in state true
	I1219 03:24:08.410859  597843 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	W1219 03:24:08.410861  597843 addons.go:248] addon dashboard should already be in state true
	W1219 03:24:08.410813  597843 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:24:08.410917  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.410947  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.410911  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.411222  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.411370  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.411453  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.411453  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.415989  597843 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:08.417155  597843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:08.439554  597843 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:24:08.439613  597843 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:08.439993  597843 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:08.440027  597843 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:24:08.440166  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.440651  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:24:08.440669  597843 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:24:08.440698  597843 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:08.440710  597843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:08.440720  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.440769  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.441275  597843 addons.go:239] Setting addon default-storageclass=true in "newest-cni-017890"
	W1219 03:24:08.441297  597843 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:24:08.441324  597843 host.go:66] Checking if "newest-cni-017890" exists ...
	I1219 03:24:08.441824  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:08.475608  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.478067  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.478685  597843 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:08.478765  597843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:08.478865  597843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-017890
	I1219 03:24:08.480899  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.504129  597843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/newest-cni-017890/id_rsa Username:docker}
	I1219 03:24:08.591723  597843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:08.609742  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:08.613751  597843 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:08.613830  597843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:08.614709  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:24:08.614732  597843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:24:08.615956  597843 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:24:08.629974  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:08.632026  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:24:08.632052  597843 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:24:08.648116  597843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:24:08.648143  597843 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:24:08.667278  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:24:10.885906  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.276119079s)
	I1219 03:24:10.885953  597843 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.2721014s)
	I1219 03:24:10.885974  597843 api_server.go:72] duration metric: took 2.47539585s to wait for apiserver process to appear ...
	I1219 03:24:10.885982  597843 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:10.886000  597843 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:24:10.886017  597843 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (2.270030396s)
	I1219 03:24:10.886076  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.256071225s)
	I1219 03:24:10.886092  597843 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:24:10.895418  597843 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:10.895447  597843 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:10.896082  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.228772183s)
	I1219 03:24:10.896112  597843 addons.go:500] Verifying addon metrics-server=true in "newest-cni-017890"
	I1219 03:24:10.896177  597843 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:24:10.896434  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:11.387109  597843 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:24:11.392750  597843 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:11.392777  597843 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:11.857689  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:24:11.886949  597843 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:24:11.891539  597843 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1219 03:24:11.892806  597843 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:11.892838  597843 api_server.go:131] duration metric: took 1.006848607s to wait for apiserver health ...
	I1219 03:24:11.892850  597843 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:11.896503  597843 system_pods.go:59] 9 kube-system pods found
	I1219 03:24:11.896560  597843 system_pods.go:61] "coredns-7d764666f9-77plh" [513e22e1-2bdb-4f22-ae50-4988898984ee] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:11.896603  597843 system_pods.go:61] "etcd-newest-cni-017890" [4b1f61d0-c938-4a8c-8a60-c0190ac59d3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:11.896621  597843 system_pods.go:61] "kindnet-jptjc" [9cc93749-83ad-4d71-a1ca-7582fb36d8c0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:11.896634  597843 system_pods.go:61] "kube-apiserver-newest-cni-017890" [f7690839-3434-49cd-ae45-a24dc25ef4f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:11.896645  597843 system_pods.go:61] "kube-controller-manager-newest-cni-017890" [50e7f6db-1f18-406d-b180-828907d447ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:24:11.896652  597843 system_pods.go:61] "kube-proxy-27f9l" [708d3c46-c794-4656-affd-5f14f12062b3] Running
	I1219 03:24:11.896670  597843 system_pods.go:61] "kube-scheduler-newest-cni-017890" [60d5eb7f-c30c-4924-aef0-92e778571bc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:24:11.896683  597843 system_pods.go:61] "metrics-server-5d785b57d4-qqtx5" [64c5d9c6-5ca0-41f1-8571-d7bcc7cf596f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:11.896695  597843 system_pods.go:61] "storage-provisioner" [7554cce6-ffd3-49e9-bb5d-841ea14e11c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:11.896706  597843 system_pods.go:74] duration metric: took 3.847345ms to wait for pod list to return data ...
	I1219 03:24:11.896721  597843 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:11.899244  597843 default_sa.go:45] found service account: "default"
	I1219 03:24:11.899274  597843 default_sa.go:55] duration metric: took 2.538656ms for default service account to be created ...
	I1219 03:24:11.899289  597843 kubeadm.go:587] duration metric: took 3.48870959s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:11.899309  597843 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:11.901655  597843 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:11.901684  597843 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:11.901698  597843 node_conditions.go:105] duration metric: took 2.383054ms to run NodePressure ...
	I1219 03:24:11.901714  597843 start.go:242] waiting for startup goroutines ...
	I1219 03:24:14.918167  597843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.060422646s)
	I1219 03:24:14.918274  597843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:15.111355  597843 addons.go:500] Verifying addon dashboard=true in "newest-cni-017890"
	I1219 03:24:15.111641  597843 cli_runner.go:164] Run: docker container inspect newest-cni-017890 --format={{.State.Status}}
	I1219 03:24:15.131003  597843 out.go:179] * Verifying dashboard addon...
	I1219 03:24:15.133142  597843 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:24:15.136271  597843 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:24:15.136289  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:15.636708  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:16.136217  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:16.636609  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:17.136118  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:17.636688  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:18.136697  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:18.636955  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:19.136327  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:19.636958  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:20.136611  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:20.635702  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:21.136714  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:21.637785  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:22.136226  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:22.636384  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:23.137131  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:23.636716  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:24.136918  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:24.636812  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:25.136713  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:25.636380  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:26.137306  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:26.637061  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:27.136404  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:27.637289  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:28.137036  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:28.636845  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:29.136941  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:29.636389  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:30.136680  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:30.637070  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:31.137104  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:31.636133  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:32.137116  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:32.636853  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:33.136443  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:33.637916  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:34.136816  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:34.636273  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:35.136706  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:35.636531  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:36.137637  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:36.637362  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:37.137059  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:37.636929  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:38.136195  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:38.637085  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:39.137374  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:39.637573  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:40.136963  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:40.637000  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:41.137191  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:41.637408  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:42.138089  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:42.636887  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:43.136768  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:43.637307  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:44.136868  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:44.636280  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:45.137216  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:45.637469  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:46.137951  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:46.638113  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:47.137452  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:47.638245  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:48.137481  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:48.637512  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:49.138424  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:49.637833  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:50.136515  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:50.637554  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:51.137416  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:51.637141  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:52.136976  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:52.636826  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:53.136387  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:53.637693  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:54.136719  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:54.637073  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:55.136721  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:55.636844  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:56.136161  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:56.637019  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:57.137210  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:57.637438  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:58.136982  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:58.636673  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:59.136739  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:59.636844  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.136445  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.637084  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.137429  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.637053  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.137302  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.637122  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.136980  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.637104  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.136665  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.636249  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.137120  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.637143  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.136858  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.636687  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.137550  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.637626  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.136466  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.636747  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.136344  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.637454  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.137285  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.636374  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.137391  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.637433  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.137711  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.636855  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.136825  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.637754  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.136333  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.636435  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.137225  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.638254  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.136303  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.637149  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:17.136452  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:17.637889  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:18.137092  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:18.637148  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:19.136614  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:19.637267  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:20.136982  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:20.636667  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:21.137862  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:21.636757  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:22.137106  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:22.637014  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:23.137497  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:23.638022  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:24.136372  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:24.636966  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:25.136133  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:25.636703  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:26.137357  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:26.637010  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:27.136149  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:27.637046  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:28.136940  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:28.636297  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:29.136898  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:29.637948  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:30.136225  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:30.636410  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:31.136857  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:31.636768  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:32.137112  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:32.637013  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:33.136801  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:33.637173  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:34.136985  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:34.637410  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:35.136679  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:35.637618  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:36.136542  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:36.636814  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:37.137275  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:37.636826  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:38.137234  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:38.636757  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:39.137687  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:39.636793  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:40.136406  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:40.636794  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:41.137255  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:41.637730  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:42.137440  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:42.637373  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:43.137017  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:43.636001  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:44.137453  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:44.636420  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:45.136827  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:45.637157  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:46.136834  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:46.636215  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:47.136987  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:47.636812  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:48.137673  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:48.637054  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:49.137764  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:49.637702  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:50.137443  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:50.637289  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:51.136306  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:51.637782  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:52.137532  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:52.637547  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:53.137622  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:53.637007  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:54.136819  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:54.637155  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:55.136635  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:55.636965  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:56.136783  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:56.636658  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:57.137690  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:57.637200  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:58.137067  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:58.636946  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:59.136283  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:59.636323  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:00.137300  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:00.636405  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:01.136751  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:01.637661  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:02.137235  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:02.636716  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:03.137766  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:03.637298  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:04.137292  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:04.636635  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:05.137202  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:05.636930  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:06.136532  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:06.637151  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:07.136713  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:07.637307  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:08.137049  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:08.636569  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:09.136927  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:09.636099  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:10.137528  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:10.636826  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:11.136021  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:11.636945  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:12.136639  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:12.637156  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:13.137203  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:13.636831  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:14.137420  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:14.636800  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:15.137333  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:15.636779  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:16.136736  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:16.637704  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:17.137952  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:26:17.636997  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	[... identical "waiting for pod" line repeated every ~0.5s from 03:26:18 to 03:28:24; pod remained Pending ...]
	I1219 03:28:25.137496  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:25.637978  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:26.136761  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:26.636455  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:27.137433  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:27.636948  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:28.136898  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:28.636542  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:29.137099  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:29.636660  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:30.136900  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:30.637057  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:31.137325  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:31.636916  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:32.137535  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:32.637686  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:33.136638  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:33.636265  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:34.136670  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:34.636540  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:35.137260  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:35.637367  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:36.136933  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:36.637243  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:37.137848  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:37.636874  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:38.136665  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:38.638112  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:39.136729  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:39.636438  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:40.136638  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:40.636163  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:41.137103  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:41.636882  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:42.136659  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:42.636677  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:43.136878  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:43.636711  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:44.136472  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:44.637566  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:45.136801  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:45.636129  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:46.137204  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:46.637261  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:47.137240  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:47.636908  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:48.136331  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:48.637640  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:49.136687  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:49.636242  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:50.136751  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:50.636957  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:51.136514  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:51.637471  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:52.137275  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:52.637302  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:53.136388  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:53.637046  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:54.137206  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:54.637225  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:55.136896  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:55.636482  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:56.138151  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:56.637198  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:57.136691  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:57.636251  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:58.136752  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:58.637409  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:59.137133  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:28:59.637029  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:00.136787  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:00.636974  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:01.136845  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	53833b9f91d37       dd54374d0ab14       7 minutes ago       Running             kubernetes-dashboard-auth   0                   7d2fdfdb8f4ca       kubernetes-dashboard-auth-69d44f85cb-ngqw8   kubernetes-dashboard
	9fd9fcb69c27d       6e38f40d628db       23 minutes ago      Running             storage-provisioner         2                   aae0cdf2362ad       storage-provisioner                          kube-system
	3cdafe1a0b3cd       3a975970da2f5       23 minutes ago      Running             proxy                       0                   dbe3fe0429042       kubernetes-dashboard-kong-78b7499b45-g6tgn   kubernetes-dashboard
	02f3bc754401e       3a975970da2f5       23 minutes ago      Exited              clear-stale-pid             0                   dbe3fe0429042       kubernetes-dashboard-kong-78b7499b45-g6tgn   kubernetes-dashboard
	708506e29f868       a0607af4fcd8a       24 minutes ago      Running             kubernetes-dashboard-api    0                   338c1a70100c2       kubernetes-dashboard-api-99557f86c-cwj8j     kubernetes-dashboard
	36f624d579187       4921d7a6dffa9       24 minutes ago      Running             kindnet-cni                 1                   4a03e4967de85       kindnet-zbmbl                                kube-system
	f72629902da50       56cc512116c8f       24 minutes ago      Running             busybox                     1                   9a062f6cd7419       busybox                                      default
	03619448b03a9       6e38f40d628db       24 minutes ago      Exited              storage-provisioner         1                   aae0cdf2362ad       storage-provisioner                          kube-system
	5262f26bad2b0       aa5e3ebc0dfed       24 minutes ago      Running             coredns                     1                   1ab134838ef92       coredns-7d764666f9-hm5hz                     kube-system
	6c79e07745b0b       af0321f3a4f38       24 minutes ago      Running             kube-proxy                  1                   1766b28c41e87       kube-proxy-xst8w                             kube-system
	cacf5e35e790a       73f80cdc073da       24 minutes ago      Running             kube-scheduler              1                   38931718ec045       kube-scheduler-no-preload-208281             kube-system
	fe48441f2b926       5032a56602e1b       24 minutes ago      Running             kube-controller-manager     1                   46efefa83a3c7       kube-controller-manager-no-preload-208281    kube-system
	e2b6de3f6ca9f       0a108f7189562       24 minutes ago      Running             etcd                        1                   7d8e57ec3badf       etcd-no-preload-208281                       kube-system
	496cc0b515ef5       58865405a13bc       24 minutes ago      Running             kube-apiserver              1                   2379cbb88b443       kube-apiserver-no-preload-208281             kube-system
	a698a9bb37123       56cc512116c8f       24 minutes ago      Exited              busybox                     0                   4abbdd6e11aa4       busybox                                      default
	0cbaba368082a       aa5e3ebc0dfed       24 minutes ago      Exited              coredns                     0                   46c9c96ac93e1       coredns-7d764666f9-hm5hz                     kube-system
	6bee3b8cfdfc0       4921d7a6dffa9       24 minutes ago      Exited              kindnet-cni                 0                   c90f21626354e       kindnet-zbmbl                                kube-system
	6647bd08b2c7d       af0321f3a4f38       25 minutes ago      Exited              kube-proxy                  0                   467f4389c2fa5       kube-proxy-xst8w                             kube-system
	0457ac1d0e6da       73f80cdc073da       25 minutes ago      Exited              kube-scheduler              0                   7e8fbfab6fa3e       kube-scheduler-no-preload-208281             kube-system
	7dd5f1a15d955       5032a56602e1b       25 minutes ago      Exited              kube-controller-manager     0                   9cda79d7fa6bc       kube-controller-manager-no-preload-208281    kube-system
	06cb2742e807f       58865405a13bc       25 minutes ago      Exited              kube-apiserver              0                   c9445c97a9ea5       kube-apiserver-no-preload-208281             kube-system
	ee999ba4f0b47       0a108f7189562       25 minutes ago      Exited              etcd                        0                   90bbfe2cbe82e       etcd-no-preload-208281                       kube-system
	
	
	==> containerd <==
	Dec 19 03:28:52 no-preload-208281 containerd[451]: time="2025-12-19T03:28:52.986373552Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pode7d80d3e_7bf1_4e49_b7f9_c0911bbae20d.slice/cri-containerd-36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.005105447Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d16e46_3e1f_4d38_a486_8f15642946c7.slice/cri-containerd-6c79e07745b0b6a4cfbe2451c7c287765d16436fb7f8d8ae0bf0a5017b7b3e22.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.005420326Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d16e46_3e1f_4d38_a486_8f15642946c7.slice/cri-containerd-6c79e07745b0b6a4cfbe2451c7c287765d16436fb7f8d8ae0bf0a5017b7b3e22.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.006706466Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59441d91_a2b7_4d87_86d1_5ccaaec4e398.slice/cri-containerd-5262f26bad2b02c527c0b40bd0ffbfc743349345eab765fb7a4a2dc9baa4a4f3.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.006877189Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59441d91_a2b7_4d87_86d1_5ccaaec4e398.slice/cri-containerd-5262f26bad2b02c527c0b40bd0ffbfc743349345eab765fb7a4a2dc9baa4a4f3.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.007956116Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1aebe15_3209_41e5_9992_da4b0690a286.slice/cri-containerd-f72629902da5012e1c73db0cc2fb5796d6949dd40d2d8d666241679a2eb11c14.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.008095525Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1aebe15_3209_41e5_9992_da4b0690a286.slice/cri-containerd-f72629902da5012e1c73db0cc2fb5796d6949dd40d2d8d666241679a2eb11c14.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.008978786Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ae2e7891eaa1ff806e636f311fb81.slice/cri-containerd-cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.009092933Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43ae2e7891eaa1ff806e636f311fb81.slice/cri-containerd-cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.009993005Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93a9992ff7a9c41e489b493737b5b488.slice/cri-containerd-e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.010118828Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93a9992ff7a9c41e489b493737b5b488.slice/cri-containerd-e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.011061510Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bab6e7d_150b_4c8e_ab0a_933ec046c863.slice/cri-containerd-9fd9fcb69c27dd803192f562a3233b3b5d43391dc2b3ad8eeb73ae2478a8ef20.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.011186223Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bab6e7d_150b_4c8e_ab0a_933ec046c863.slice/cri-containerd-9fd9fcb69c27dd803192f562a3233b3b5d43391dc2b3ad8eeb73ae2478a8ef20.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.012200980Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9548d7ec_c85a_4cd0_8105_d4105327518f.slice/cri-containerd-708506e29f86833f15d101ae0ed3ecddeef0698196d03758d60545d772495dbb.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.012307629Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9548d7ec_c85a_4cd0_8105_d4105327518f.slice/cri-containerd-708506e29f86833f15d101ae0ed3ecddeef0698196d03758d60545d772495dbb.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.013234434Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pode7d80d3e_7bf1_4e49_b7f9_c0911bbae20d.slice/cri-containerd-36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.013355122Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pode7d80d3e_7bf1_4e49_b7f9_c0911bbae20d.slice/cri-containerd-36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.014283593Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb0b181_97ad_4854_b20a_0c327870fe32.slice/cri-containerd-3cdafe1a0b3cd2742862d68aeb352fb4b6954a0436e9bf279a9ef67a0d7e28a6.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.014400411Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb0b181_97ad_4854_b20a_0c327870fe32.slice/cri-containerd-3cdafe1a0b3cd2742862d68aeb352fb4b6954a0436e9bf279a9ef67a0d7e28a6.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.015640825Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ee399e_d9d7_44cb_8ddb_ad815ecf728c.slice/cri-containerd-53833b9f91d370667647d4a3c1ed306beb7f4b9993b0f6721172fcc0056f1d08.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.015945723Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ee399e_d9d7_44cb_8ddb_ad815ecf728c.slice/cri-containerd-53833b9f91d370667647d4a3c1ed306beb7f4b9993b0f6721172fcc0056f1d08.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.017079711Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355754afcd0ce2d7bab6c853c60e836c.slice/cri-containerd-496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.017190925Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod355754afcd0ce2d7bab6c853c60e836c.slice/cri-containerd-496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c.scope/hugetlb.1GB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.018130276Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80442131b1359e6657f2959b40f80467.slice/cri-containerd-fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569.scope/hugetlb.2MB.events\""
	Dec 19 03:29:03 no-preload-208281 containerd[451]: time="2025-12-19T03:29:03.018234752Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80442131b1359e6657f2959b40f80467.slice/cri-containerd-fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569.scope/hugetlb.1GB.events\""
	
	
	==> coredns [0cbaba368082a3f121bc09e60595d1ff592ec5796ecc2115579e6f149ade94d7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45603 - 16917 "HINFO IN 759710811400899281.7107360172383803948. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035088679s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5262f26bad2b02c527c0b40bd0ffbfc743349345eab765fb7a4a2dc9baa4a4f3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:46213 - 43876 "HINFO IN 5483904004133871625.7627938750895341566. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036456851s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-208281
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-208281
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-208281
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_03_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-208281
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:29:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:28:53 +0000   Fri, 19 Dec 2025 03:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:28:53 +0000   Fri, 19 Dec 2025 03:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:28:53 +0000   Fri, 19 Dec 2025 03:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:28:53 +0000   Fri, 19 Dec 2025 03:04:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-208281
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                1c0e3333-d7dc-4f0f-825f-76ec9118fda3
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7d764666f9-hm5hz                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25m
	  kube-system                 etcd-no-preload-208281                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-zbmbl                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-no-preload-208281                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-no-preload-208281                200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-xst8w                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-no-preload-208281                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 metrics-server-5d785b57d4-zgcxz                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         24m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kubernetes-dashboard        kubernetes-dashboard-api-99557f86c-cwj8j                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-auth-69d44f85cb-ngqw8               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-g6tgn               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-mh44r                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  25m   node-controller  Node no-preload-208281 event: Registered Node no-preload-208281 in Controller
	  Normal  RegisteredNode  24m   node-controller  Node no-preload-208281 event: Registered Node no-preload-208281 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [e2b6de3f6ca9fb6a4cf59ba73f641b81403ff210fb542805c0ecfb406f229a1a] <==
	{"level":"info","ts":"2025-12-19T03:04:51.137271Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:51.137296Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:04:51.137318Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.138300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.138338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:04:51.138361Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.138371Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-19T03:04:51.139075Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-208281 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:04:51.139251Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:51.139298Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:51.140658Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:04:51.150478Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:04:51.152337Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:51.154001Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:51.159679Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-19T03:04:51.160172Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:14:51.185748Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1205}
	{"level":"info","ts":"2025-12-19T03:14:51.206638Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1205,"took":"20.48087ms","hash":2178082393,"current-db-size-bytes":4427776,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":1986560,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:14:51.206700Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2178082393,"revision":1205,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:51.190458Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1471}
	{"level":"info","ts":"2025-12-19T03:19:51.193684Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1471,"took":"2.826795ms","hash":131670808,"current-db-size-bytes":4427776,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":2269184,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-12-19T03:19:51.193728Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":131670808,"revision":1471,"compact-revision":1205}
	{"level":"info","ts":"2025-12-19T03:24:51.195667Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1730}
	{"level":"info","ts":"2025-12-19T03:24:51.198979Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1730,"took":"2.920383ms","hash":180743744,"current-db-size-bytes":4427776,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":2240512,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T03:24:51.199020Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":180743744,"revision":1730,"compact-revision":1471}
	
	
	==> etcd [ee999ba4f0b47eadf10730be5384ab5f3b45f01128186cb9ab42b8df5c0b7400] <==
	{"level":"info","ts":"2025-12-19T03:03:51.907125Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T03:03:52.092230Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:52.092325Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:52.092385Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:52.092400Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:03:52.092420Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.092901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.092934Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:03:52.092956Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.092968Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:52.093621Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:no-preload-208281 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:03:52.093797Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:03:52.094157Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.094316Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:03:52.094753Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:03:52.094788Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:03:52.095692Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:03:52.096533Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:03:52.096740Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.097119Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.097293Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:03:52.097472Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-19T03:03:52.097708Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-19T03:03:52.100387Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-19T03:03:52.100520Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:29:04 up  2:11,  0 user,  load average: 0.89, 0.77, 2.46
	Linux no-preload-208281 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36f624d579187169ec514783619d4511851864690e5cc61baa5e5cafd8dc3d30] <==
	I1219 03:27:04.674033       1 main.go:301] handling current node
	I1219 03:27:14.673814       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:27:14.673855       1 main.go:301] handling current node
	I1219 03:27:24.673912       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:27:24.673967       1 main.go:301] handling current node
	I1219 03:27:34.674182       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:27:34.674233       1 main.go:301] handling current node
	I1219 03:27:44.681550       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:27:44.681615       1 main.go:301] handling current node
	I1219 03:27:54.679930       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:27:54.679961       1 main.go:301] handling current node
	I1219 03:28:04.675390       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:28:04.675445       1 main.go:301] handling current node
	I1219 03:28:14.674507       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:28:14.674563       1 main.go:301] handling current node
	I1219 03:28:24.677781       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:28:24.677823       1 main.go:301] handling current node
	I1219 03:28:34.678628       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:28:34.678661       1 main.go:301] handling current node
	I1219 03:28:44.675640       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:28:44.675700       1 main.go:301] handling current node
	I1219 03:28:54.673652       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:28:54.673691       1 main.go:301] handling current node
	I1219 03:29:04.678705       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:29:04.678742       1 main.go:301] handling current node
	
	
	==> kindnet [6bee3b8cfdfc0cfbd9e189118939b5349e2c5e27938c7584f8e1081b62329aa5] <==
	I1219 03:04:06.051097       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:06.051366       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1219 03:04:06.051510       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:06.051536       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:06.051565       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:06.349389       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:06.349419       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:06.349429       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:06.349652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:06.649804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:06.649831       1 metrics.go:72] Registering metrics
	I1219 03:04:06.649914       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:16.349785       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:04:16.349843       1 main.go:301] handling current node
	I1219 03:04:26.350693       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:04:26.350742       1 main.go:301] handling current node
	
	
	==> kube-apiserver [06cb2742e807f0ab357f122a308cc5bad433f366487ddbbf67177cc3d6f74e2b] <==
	I1219 03:03:56.969284       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:04:01.627715       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:01.633710       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:01.720366       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:04:01.820488       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:04:29.231885       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35654: use of closed network connection
	I1219 03:04:29.919479       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:29.924208       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:29.924345       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:04:29.924420       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:30.004960       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.107.180.11"}
	W1219 03:04:30.009324       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:30.009394       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:04:30.015660       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:30.015711       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [496cc0b515ef5f8631302beefecc3380ae6563f4ed57ee1a286e52a06f30249c] <==
	E1219 03:24:53.502065       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1219 03:24:53.502078       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:24:53.502088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:24:53.503204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:25:53.502569       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:25:53.502666       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:25:53.502682       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:25:53.503715       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:25:53.503841       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:25:53.503864       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:27:53.503473       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:27:53.503529       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:27:53.503545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:27:53.504688       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:27:53.504789       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:27:53.504822       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7dd5f1a15d9551e729b3a73b8dd9b36565eb7c597d451995076e1f6606444459] <==
	I1219 03:04:00.839117       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.837799       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.837808       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.839137       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.838124       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.837790       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840052       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:00.838829       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1219 03:04:00.840148       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840179       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840208       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840419       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.840520       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:04:00.840621       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-208281"
	I1219 03:04:00.840694       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 03:04:00.840804       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:04:00.840892       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:00.840932       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.844066       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.851874       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-208281" podCIDRs=["10.244.0.0/24"]
	I1219 03:04:00.936090       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:00.936112       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:04:00.936119       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:04:00.941316       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:20.843624       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [fe48441f2b926fe3ab51c21e04ab3ca395718e082d30ebb97fb87169ccd0e569] <==
	I1219 03:22:57.082758       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:23:26.962766       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:23:27.092668       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:23:56.967176       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:23:57.100303       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:24:26.971252       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:24:27.107756       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:24:56.975730       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:24:57.116259       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:25:26.980341       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:25:27.124090       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:25:56.985576       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:25:57.132862       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:26:26.989854       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:26:27.141627       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:26:56.994675       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:26:57.150048       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:27:26.999614       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:27:27.157618       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:27:57.005730       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:27:57.164675       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:28:27.010455       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:28:27.172970       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:28:57.016033       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:28:57.180824       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [6647bd08b2c7d47c6eead48e5e683e5121b40b05c0fe31a31cf2329b794cf45e] <==
	I1219 03:04:02.669560       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:02.763688       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:02.864741       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:02.864778       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:04:02.864887       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:02.895089       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:02.895152       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:04:02.901730       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:02.902597       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:04:02.902656       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:02.905212       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:02.905267       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:02.905299       1 config.go:200] "Starting service config controller"
	I1219 03:04:02.905503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:02.905543       1 config.go:309] "Starting node config controller"
	I1219 03:04:02.905556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:02.905343       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:02.905575       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:02.905613       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:03.005940       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:03.005960       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:03.006069       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [6c79e07745b0b6a4cfbe2451c7c287765d16436fb7f8d8ae0bf0a5017b7b3e22] <==
	I1219 03:04:53.926249       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:54.009323       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:54.110296       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:54.110360       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:04:54.112201       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:54.144319       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:54.144539       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:04:54.152380       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:54.153129       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:04:54.153409       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:54.158402       1 config.go:200] "Starting service config controller"
	I1219 03:04:54.158424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:54.158455       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:54.158461       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:54.158725       1 config.go:309] "Starting node config controller"
	I1219 03:04:54.158769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:54.158802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:54.159359       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:54.159389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:54.259246       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:54.259286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:54.259563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0457ac1d0e6da6c1cc69c3583af9c20d14f2274fefa0497dcdce311fa1b7a1d9] <==
	E1219 03:03:54.053347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:03:54.053428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:03:54.053431       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:03:54.053480       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:03:54.054473       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:03:54.054832       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:03:54.054918       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 03:03:54.930372       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 03:03:54.986215       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:03:55.014573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:03:55.022953       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:03:55.060177       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:03:55.077789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:03:55.092080       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:03:55.132455       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:03:55.168987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:03:55.187464       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:03:55.195073       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:03:55.310657       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:03:55.319194       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 03:03:55.338053       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 03:03:55.339019       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:03:55.400888       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:03:55.505051       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:03:57.844169       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [cacf5e35e790ac46df5f1339f8e234ed8ffd764c954e85131b71bce9fc1293aa] <==
	I1219 03:04:50.553894       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:04:52.470363       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:04:52.470518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:52.470533       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:04:52.470542       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:04:52.503045       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:04:52.503075       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:52.505942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:04:52.505985       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:52.506691       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:04:52.506775       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:04:52.606360       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:28:16 no-preload-208281 kubelet[575]: E1219 03:28:16.139288     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:28:18 no-preload-208281 kubelet[575]: E1219 03:28:18.138347     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-g6tgn" containerName="proxy"
	Dec 19 03:28:23 no-preload-208281 kubelet[575]: E1219 03:28:23.138670     575 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-208281" containerName="etcd"
	Dec 19 03:28:23 no-preload-208281 kubelet[575]: E1219 03:28:23.138797     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:28:23 no-preload-208281 kubelet[575]: E1219 03:28:23.140445     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:28:24 no-preload-208281 kubelet[575]: E1219 03:28:24.139387     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-mh44r" podUID="e17cc5d6-dcc5-48bb-8976-a7ad6acbc1e5"
	Dec 19 03:28:28 no-preload-208281 kubelet[575]: E1219 03:28:28.138537     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:28:28 no-preload-208281 kubelet[575]: E1219 03:28:28.139793     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:28:34 no-preload-208281 kubelet[575]: E1219 03:28:34.138717     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:28:34 no-preload-208281 kubelet[575]: E1219 03:28:34.139942     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:28:38 no-preload-208281 kubelet[575]: E1219 03:28:38.139245     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-mh44r" podUID="e17cc5d6-dcc5-48bb-8976-a7ad6acbc1e5"
	Dec 19 03:28:40 no-preload-208281 kubelet[575]: E1219 03:28:40.138884     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:28:40 no-preload-208281 kubelet[575]: E1219 03:28:40.140137     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:28:41 no-preload-208281 kubelet[575]: E1219 03:28:41.137935     575 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-208281" containerName="kube-scheduler"
	Dec 19 03:28:49 no-preload-208281 kubelet[575]: E1219 03:28:49.138557     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:28:49 no-preload-208281 kubelet[575]: E1219 03:28:49.139828     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:28:52 no-preload-208281 kubelet[575]: E1219 03:28:52.138744     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:28:52 no-preload-208281 kubelet[575]: E1219 03:28:52.139783     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	Dec 19 03:28:52 no-preload-208281 kubelet[575]: E1219 03:28:52.139984     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-web\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-web:1.7.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-web/manifests/sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-mh44r" podUID="e17cc5d6-dcc5-48bb-8976-a7ad6acbc1e5"
	Dec 19 03:29:02 no-preload-208281 kubelet[575]: E1219 03:29:02.138049     575 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-208281" containerName="kube-apiserver"
	Dec 19 03:29:02 no-preload-208281 kubelet[575]: E1219 03:29:02.138162     575 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:29:02 no-preload-208281 kubelet[575]: E1219 03:29:02.138351     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hm5hz" containerName="coredns"
	Dec 19 03:29:02 no-preload-208281 kubelet[575]: E1219 03:29:02.139328     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" podUID="55fab15b-99b5-48cb-8d53-90201e74b1a4"
	Dec 19 03:29:04 no-preload-208281 kubelet[575]: E1219 03:29:04.138474     575 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-zgcxz" containerName="metrics-server"
	Dec 19 03:29:04 no-preload-208281 kubelet[575]: E1219 03:29:04.140443     575 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-5d785b57d4-zgcxz" podUID="743fe6aa-308c-4f80-b7f5-c753be058b69"
	
	
	==> kubernetes-dashboard [53833b9f91d370667647d4a3c1ed306beb7f4b9993b0f6721172fcc0056f1d08] <==
	I1219 03:21:10.810642       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:21:10.810712       1 init.go:49] Using in-cluster config
	I1219 03:21:10.810859       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [708506e29f86833f15d101ae0ed3ecddeef0698196d03758d60545d772495dbb] <==
	E1219 03:17:01.829174       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:17:31.831895       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:18:01.835241       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:18:31.837889       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:19:01.841322       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:19:31.844048       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:20:01.846708       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:20:31.852446       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:21:01.855944       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:21:31.859637       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:22:01.862844       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:22:31.866083       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:23:01.869354       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:23:31.874777       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:24:01.879766       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:24:31.883542       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:25:01.887604       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:25:31.891431       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:26:01.894571       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:26:31.898195       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:27:01.901955       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:27:31.904721       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:28:01.907796       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:28:31.911108       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	E1219 03:29:01.914974       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [03619448b03a9d04f872cfb774d4d982be35c7d2aa5ec4e413e4952934532bf3] <==
	I1219 03:04:53.956199       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:23.958855       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9fd9fcb69c27dd803192f562a3233b3b5d43391dc2b3ad8eeb73ae2478a8ef20] <==
	W1219 03:28:41.246488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:43.249666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:43.254418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:45.257645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:45.263164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:47.266169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:47.270430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:49.273931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:49.278155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:51.281342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:51.285670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:53.289532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:53.293574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:55.297202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:55.301674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:57.304995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:57.309332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:59.314371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:59.322684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:01.326400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:01.331052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:03.334875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:03.340390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:05.344262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:05.349188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-208281 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-208281 describe pod metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-208281 describe pod metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r: exit status 1 (63.326789ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-zgcxz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-web-7f7574785f-mh44r" not found

** /stderr **
helpers_test.go:288: kubectl --context no-preload-208281 describe pod metrics-server-5d785b57d4-zgcxz kubernetes-dashboard-metrics-scraper-594bbfb84b-2htgn kubernetes-dashboard-web-7f7574785f-mh44r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.16s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (543.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:21:19.190931  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:22:55.863069  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:23:12.808673  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:29:11.836965477 +0000 UTC m=+3828.830089679
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103644 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (138.499088ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-103644 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-103644
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-103644:

-- stdout --
	[
	    {
	        "Id": "11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9",
	        "Created": "2025-12-19T03:03:44.128298232Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:51.023922591Z",
	            "FinishedAt": "2025-12-19T03:04:49.466246851Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/hosts",
	        "LogPath": "/var/lib/docker/containers/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9/11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9-json.log",
	        "Name": "/default-k8s-diff-port-103644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-103644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-103644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11c02a7b93988dbed616b611ca923dcf4724191c862108f677f27c4139daa4c9",
	                "LowerDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/910cd6f0965b719597314beaf97ec867b7fd2c394e2d54f04ca8b78b3b843d35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103644",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103644",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b38f0ca163c1f6cea6d42444c01079612af48be1fb169bb1f4a9bfd3afff4f26",
	            "SandboxKey": "/var/run/docker/netns/b38f0ca163c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-103644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b673923273edc2de2d190d760404bd86e1a35010cdce8800eb3623a9ac5b14fd",
	                    "EndpointID": "ad847978d54254c34f0d29018c8c8fef26d9735df8d4399060c7b0455bfafb6f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "a6:4e:56:15:10:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103644",
	                        "11c02a7b9398"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-103644 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-103644 logs -n 25: (1.690778487s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                           ARGS                                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:10 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-103644 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:11 UTC │
	│ image   │ old-k8s-version-002036 image list --format=json                                                                                                                                                                                                          │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-002036 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ unpause │ -p old-k8s-version-002036 --alsologtostderr -v=1                                                                                                                                                                                                         │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ delete  │ -p old-k8s-version-002036                                                                                                                                                                                                                                │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ delete  │ -p old-k8s-version-002036                                                                                                                                                                                                                                │ old-k8s-version-002036       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ start   │ -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-017890 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                  │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ stop    │ -p newest-cni-017890 --alsologtostderr -v=3                                                                                                                                                                                                              │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-017890 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                             │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-017890            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ image   │ embed-certs-536489 image list --format=json                                                                                                                                                                                                              │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ pause   │ -p embed-certs-536489 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ unpause │ -p embed-certs-536489 --alsologtostderr -v=1                                                                                                                                                                                                             │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ delete  │ -p embed-certs-536489                                                                                                                                                                                                                                    │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ image   │ no-preload-208281 image list --format=json                                                                                                                                                                                                               │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ pause   │ -p no-preload-208281 --alsologtostderr -v=1                                                                                                                                                                                                              │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ delete  │ -p embed-certs-536489                                                                                                                                                                                                                                    │ embed-certs-536489           │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ start   │ -p auto-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                            │ auto-289681                  │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │                     │
	│ unpause │ -p no-preload-208281 --alsologtostderr -v=1                                                                                                                                                                                                              │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │ 19 Dec 25 03:29 UTC │
	│ delete  │ -p no-preload-208281                                                                                                                                                                                                                                     │ no-preload-208281            │ jenkins │ v1.37.0 │ 19 Dec 25 03:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:29:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:29:08.001903  605639 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:29:08.002195  605639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:29:08.002205  605639 out.go:374] Setting ErrFile to fd 2...
	I1219 03:29:08.002209  605639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:29:08.002454  605639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:29:08.003096  605639 out.go:368] Setting JSON to false
	I1219 03:29:08.004486  605639 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7887,"bootTime":1766107061,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:29:08.004549  605639 start.go:143] virtualization: kvm guest
	I1219 03:29:08.006714  605639 out.go:179] * [auto-289681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:29:08.007791  605639 notify.go:221] Checking for updates...
	I1219 03:29:08.007802  605639 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:29:08.008820  605639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:29:08.010387  605639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:29:08.011366  605639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:29:08.012382  605639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:29:08.013720  605639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:29:08.015208  605639 config.go:182] Loaded profile config "default-k8s-diff-port-103644": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:29:08.015321  605639 config.go:182] Loaded profile config "newest-cni-017890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:29:08.015409  605639 config.go:182] Loaded profile config "no-preload-208281": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:29:08.015493  605639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:29:08.045305  605639 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:29:08.045402  605639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:29:08.111560  605639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:29:08.100169656 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:29:08.111740  605639 docker.go:319] overlay module found
	I1219 03:29:08.113522  605639 out.go:179] * Using the docker driver based on user configuration
	I1219 03:29:08.114634  605639 start.go:309] selected driver: docker
	I1219 03:29:08.114651  605639 start.go:928] validating driver "docker" against <nil>
	I1219 03:29:08.114673  605639 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:29:08.115300  605639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:29:08.190760  605639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:29:08.179689628 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:29:08.190955  605639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 03:29:08.191197  605639 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:29:08.192702  605639 out.go:179] * Using Docker driver with root privileges
	I1219 03:29:08.193920  605639 cni.go:84] Creating CNI manager for ""
	I1219 03:29:08.194004  605639 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 03:29:08.194019  605639 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:29:08.194108  605639 start.go:353] cluster config:
	{Name:auto-289681 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-289681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:29:08.195267  605639 out.go:179] * Starting "auto-289681" primary control-plane node in "auto-289681" cluster
	I1219 03:29:08.196306  605639 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 03:29:08.197314  605639 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:29:08.198235  605639 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:29:08.198267  605639 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:29:08.198277  605639 cache.go:65] Caching tarball of preloaded images
	I1219 03:29:08.198323  605639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:29:08.198375  605639 preload.go:238] Found /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:29:08.198390  605639 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:29:08.198475  605639 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/auto-289681/config.json ...
	I1219 03:29:08.198495  605639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/auto-289681/config.json: {Name:mkd3d830e3140d794188f726246cd379a2a44648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:29:08.219119  605639 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:29:08.219143  605639 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:29:08.219164  605639 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:29:08.219202  605639 start.go:360] acquireMachinesLock for auto-289681: {Name:mk5879a82dc48b41f8e9deaf01c4dc7b14f955ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:29:08.219325  605639 start.go:364] duration metric: took 99.933µs to acquireMachinesLock for "auto-289681"
	I1219 03:29:08.219354  605639 start.go:93] Provisioning new machine with config: &{Name:auto-289681 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-289681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:29:08.219463  605639 start.go:125] createHost starting for "" (driver="docker")
	I1219 03:29:06.637385  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:07.137027  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:07.637676  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:08.139074  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:08.637428  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:09.137384  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:09.637715  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:10.137100  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:10.637872  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:29:11.136638  597843 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                                    NAMESPACE
	ac0a0c539d898       59f642f485d26       13 minutes ago      Running             kubernetes-dashboard-web   0                   9bc0dd014fef8       kubernetes-dashboard-web-5c9f966b98-bngtm              kubernetes-dashboard
	0909564061f06       6e38f40d628db       23 minutes ago      Running             storage-provisioner        2                   5152a417532c2       storage-provisioner                                    kube-system
	a6e170e632275       4921d7a6dffa9       24 minutes ago      Running             kindnet-cni                1                   5849334da0dca       kindnet-vgs5z                                          kube-system
	47a843aefeca9       36eef8e07bdd6       24 minutes ago      Running             kube-proxy                 1                   042d932b9f0bc       kube-proxy-lgw6f                                       kube-system
	58c1d664efdd8       52546a367cc9e       24 minutes ago      Running             coredns                    1                   970053a95c619       coredns-66bc5c9577-86vsf                               kube-system
	b836d490b5796       6e38f40d628db       24 minutes ago      Exited              storage-provisioner        1                   5152a417532c2       storage-provisioner                                    kube-system
	a9f9dbf5e77fc       56cc512116c8f       24 minutes ago      Running             busybox                    1                   231308711d724       busybox                                                default
	19baa8a9717c2       5826b25d990d7       24 minutes ago      Running             kube-controller-manager    1                   6f1ccb334eeac       kube-controller-manager-default-k8s-diff-port-103644   kube-system
	c2591b42ec56d       aec12dadf56dd       24 minutes ago      Running             kube-scheduler             1                   0074e46caf6e4       kube-scheduler-default-k8s-diff-port-103644            kube-system
	a8858dc4fe6cf       aa27095f56193       24 minutes ago      Running             kube-apiserver             1                   6fa84325f8005       kube-apiserver-default-k8s-diff-port-103644            kube-system
	fc945986f0d8b       a3e246e9556e9       24 minutes ago      Running             etcd                       1                   84008f4f31b82       etcd-default-k8s-diff-port-103644                      kube-system
	cedf8929206c7       56cc512116c8f       24 minutes ago      Exited              busybox                    0                   37532e04fd49d       busybox                                                default
	36e5d694c8907       52546a367cc9e       24 minutes ago      Exited              coredns                    0                   07e3ff0e5bdf5       coredns-66bc5c9577-86vsf                               kube-system
	72384f1ad49d7       4921d7a6dffa9       24 minutes ago      Exited              kindnet-cni                0                   30b833d027ad1       kindnet-vgs5z                                          kube-system
	872846ec96d2d       36eef8e07bdd6       25 minutes ago      Exited              kube-proxy                 0                   fd1cacdcce013       kube-proxy-lgw6f                                       kube-system
	dd57b66fad064       aec12dadf56dd       25 minutes ago      Exited              kube-scheduler             0                   448f7bd23d9ce       kube-scheduler-default-k8s-diff-port-103644            kube-system
	ee8c252f3d8f4       5826b25d990d7       25 minutes ago      Exited              kube-controller-manager    0                   b546ea1d48bb1       kube-controller-manager-default-k8s-diff-port-103644   kube-system
	069eca43bbcc0       aa27095f56193       25 minutes ago      Exited              kube-apiserver             0                   b8db3828b19c8       kube-apiserver-default-k8s-diff-port-103644            kube-system
	49ae9ae966417       a3e246e9556e9       25 minutes ago      Exited              etcd                       0                   b8649ca1f26b0       etcd-default-k8s-diff-port-103644                      kube-system
	
	
	==> containerd <==
	Dec 19 03:29:00 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:00.967841652Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.1GB.events\""
	Dec 19 03:29:00 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:00.968689312Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.2MB.events\""
	Dec 19 03:29:00 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:00.968791249Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.1GB.events\""
	Dec 19 03:29:00 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:00.969611937Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cf4b38188d4b0d664648ad2102013.slice/cri-containerd-a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1.scope/hugetlb.2MB.events\""
	Dec 19 03:29:00 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:00.969747984Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cf4b38188d4b0d664648ad2102013.slice/cri-containerd-a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.985179749Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cf4b38188d4b0d664648ad2102013.slice/cri-containerd-a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.985301487Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996cf4b38188d4b0d664648ad2102013.slice/cri-containerd-a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.986305698Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f4d1ce4fca33a4531f882f5fb97a4e.slice/cri-containerd-c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.986430809Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f4d1ce4fca33a4531f882f5fb97a4e.slice/cri-containerd-c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.987170031Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d1d235_bad2_4304_8138_0d5f860d9a2a.slice/cri-containerd-a9f9dbf5e77fc449643d926d72a65bfee72a213de581f62f436a89bf5abae44a.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.987252536Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d1d235_bad2_4304_8138_0d5f860d9a2a.slice/cri-containerd-a9f9dbf5e77fc449643d926d72a65bfee72a213de581f62f436a89bf5abae44a.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.987976316Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b924f3_ac71_431b_a3e6_f85f1e0b94e6.slice/cri-containerd-58c1d664efdd8684b61585de1ce35b1c3bc4e2857602c929dee1f70db16c68e0.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.988109783Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b924f3_ac71_431b_a3e6_f85f1e0b94e6.slice/cri-containerd-58c1d664efdd8684b61585de1ce35b1c3bc4e2857602c929dee1f70db16c68e0.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.988985583Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod3a78062f_cab2_4e56_bc36_33ecf9505255.slice/cri-containerd-a6e170e632275e1120bb398e83b22120c4c7eb49866f53c50f5736a071087f45.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.989091568Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod3a78062f_cab2_4e56_bc36_33ecf9505255.slice/cri-containerd-a6e170e632275e1120bb398e83b22120c4c7eb49866f53c50f5736a071087f45.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.990123077Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac53bb8a0832eefbaa4a648be6aad901.slice/cri-containerd-19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.990261182Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac53bb8a0832eefbaa4a648be6aad901.slice/cri-containerd-19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.991201905Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4461b1_0b30_427d_9e31_107cea049612.slice/cri-containerd-47a843aefeca97fb22cc246b51d4c45d4468c52e15b42a86d187a0f0219b93c1.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.991343820Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4461b1_0b30_427d_9e31_107cea049612.slice/cri-containerd-47a843aefeca97fb22cc246b51d4c45d4468c52e15b42a86d187a0f0219b93c1.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.992254679Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4275d7c883d3f735b8de47264bc63415.slice/cri-containerd-fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.992363005Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4275d7c883d3f735b8de47264bc63415.slice/cri-containerd-fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.993092969Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.993189927Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12460c5_0196_4171_a44f_31b13af14f9f.slice/cri-containerd-0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e.scope/hugetlb.1GB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.994047400Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.2MB.events\""
	Dec 19 03:29:10 default-k8s-diff-port-103644 containerd[447]: time="2025-12-19T03:29:10.994180761Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58de521f_3998_43e8_a935_3a43f0a176f8.slice/cri-containerd-ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c.scope/hugetlb.1GB.events\""
	
	
	==> coredns [36e5d694c8907189486901b0aad40fae056b856f62180e718acb50ce029ecd0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38780 - 14298 "HINFO IN 3502738313717446473.3594976055449755558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04275935s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [58c1d664efdd8684b61585de1ce35b1c3bc4e2857602c929dee1f70db16c68e0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39833 - 46591 "HINFO IN 7296903648635083896.2998695300198609950. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062731195s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-103644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-103644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103644
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:29:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:24 +0000   Fri, 19 Dec 2025 03:04:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:24 +0000   Fri, 19 Dec 2025 03:04:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:24 +0000   Fri, 19 Dec 2025 03:04:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:24 +0000   Fri, 19 Dec 2025 03:04:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-103644
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                ecfbcbac-fe01-4091-9f52-5962521dd868
	  Boot ID:                    a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-66bc5c9577-86vsf                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25m
	  kube-system                 etcd-default-k8s-diff-port-103644                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-vgs5z                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-default-k8s-diff-port-103644              250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103644     200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-lgw6f                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-default-k8s-diff-port-103644              100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 metrics-server-746fcd58dc-tctv8                          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         24m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kubernetes-dashboard        kubernetes-dashboard-api-b9fbd5f9b-dpv56                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-auth-85fbf6f9bb-jzn2l               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-k2snn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-bngtm                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  1100m (13%)
	  memory             1220Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25m                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientPID     25m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           25m                node-controller  Node default-k8s-diff-port-103644 event: Registered Node default-k8s-diff-port-103644 in Controller
	  Normal  NodeReady                24m                kubelet          Node default-k8s-diff-port-103644 status is now: NodeReady
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x9 over 24m)  kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x7 over 24m)  kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node default-k8s-diff-port-103644 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node default-k8s-diff-port-103644 event: Registered Node default-k8s-diff-port-103644 in Controller
	
	
	==> dmesg <==
	[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395482] i8042: Warning: Keylock active
	[  +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497460] block sda: the capability attribute has been deprecated.
	[  +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.499240] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 03:03] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/29/fs': -2
	
	
	==> etcd [49ae9ae9664179746d1cc4f0b2904783aa2c4b1e268ce918bf8eb4eec3c61233] <==
	{"level":"warn","ts":"2025-12-19T03:04:01.606669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.618756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.626680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.634379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.641184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.647980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.655487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.662426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.671215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.678050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.684701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.691898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.698702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.705237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.711908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.719836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.728041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.737149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.745905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.753216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.760860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.779738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.787000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.794957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:01.851235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37240","server-name":"","error":"EOF"}
	
	
	==> etcd [fc945986f0d8b51799666a6eb31fb812b90a17842cc95aa6d352ef48b62f0652] <==
	{"level":"warn","ts":"2025-12-19T03:04:59.795473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.802792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.810499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.881417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.695955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.722151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.738895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:03.751177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.827689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.865441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.884527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.897782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.920481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.940127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:33.956785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:14:59.205849Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1294}
	{"level":"info","ts":"2025-12-19T03:14:59.227199Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1294,"took":"20.97744ms","hash":3936518112,"current-db-size-bytes":4775936,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-19T03:14:59.227265Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3936518112,"revision":1294,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:19:59.209702Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1572}
	{"level":"info","ts":"2025-12-19T03:19:59.212262Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1572,"took":"2.190137ms","hash":2394319790,"current-db-size-bytes":4775936,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":2347008,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-12-19T03:19:59.212295Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2394319790,"revision":1572,"compact-revision":1294}
	{"level":"info","ts":"2025-12-19T03:24:59.213975Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1846}
	{"level":"info","ts":"2025-12-19T03:24:59.219303Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1846,"took":"4.734658ms","hash":3534426356,"current-db-size-bytes":4775936,"current-db-size":"4.8 MB","current-db-size-in-use-bytes":2387968,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-12-19T03:24:59.219360Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3534426356,"revision":1846,"compact-revision":1572}
	{"level":"info","ts":"2025-12-19T03:29:11.968805Z","caller":"traceutil/trace.go:172","msg":"trace[789234229] transaction","detail":"{read_only:false; response_revision:2328; number_of_response:1; }","duration":"129.667721ms","start":"2025-12-19T03:29:11.839113Z","end":"2025-12-19T03:29:11.968781Z","steps":["trace[789234229] 'process raft request'  (duration: 64.231593ms)","trace[789234229] 'compare'  (duration: 65.194151ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:29:13 up  2:11,  0 user,  load average: 1.30, 0.86, 2.47
	Linux default-k8s-diff-port-103644 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [72384f1ad49d78ec6df6f3d6d752884b9f63349eb0cf50bb752be26fdba3141d] <==
	I1219 03:04:14.534663       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:14.534971       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1219 03:04:14.535141       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:14.535168       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:14.535205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:14.754984       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:14.755030       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:14.755058       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:14.755213       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:15.355607       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:15.355657       1 metrics.go:72] Registering metrics
	I1219 03:04:15.355726       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:24.830782       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:04:24.830850       1 main.go:301] handling current node
	I1219 03:04:34.830919       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:04:34.830978       1 main.go:301] handling current node
	
	
	==> kindnet [a6e170e632275e1120bb398e83b22120c4c7eb49866f53c50f5736a071087f45] <==
	I1219 03:27:12.094785       1 main.go:301] handling current node
	I1219 03:27:22.092327       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:27:22.092367       1 main.go:301] handling current node
	I1219 03:27:32.087561       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:27:32.087643       1 main.go:301] handling current node
	I1219 03:27:42.088682       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:27:42.088731       1 main.go:301] handling current node
	I1219 03:27:52.090073       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:27:52.090129       1 main.go:301] handling current node
	I1219 03:28:02.087635       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:28:02.087679       1 main.go:301] handling current node
	I1219 03:28:12.096208       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:28:12.096244       1 main.go:301] handling current node
	I1219 03:28:22.093097       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:28:22.093131       1 main.go:301] handling current node
	I1219 03:28:32.087719       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:28:32.087756       1 main.go:301] handling current node
	I1219 03:28:42.092562       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:28:42.092644       1 main.go:301] handling current node
	I1219 03:28:52.088681       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:28:52.088747       1 main.go:301] handling current node
	I1219 03:29:02.087433       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:29:02.087467       1 main.go:301] handling current node
	I1219 03:29:12.095682       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:29:12.095731       1 main.go:301] handling current node
	
	
	==> kube-apiserver [069eca43bbcc0eb20a0e387dc92839a2dc811dd9acdf65cf2e9fe7389f32d3cd] <==
	I1219 03:04:05.334655       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:04:05.344666       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:04:10.473953       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:10.478423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:10.615157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:04:10.620643       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:04:36.661994       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:35430: use of closed network connection
	I1219 03:04:37.342960       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:04:37.346399       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:37.346459       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:04:37.346512       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:04:37.415122       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.101.92.168"}
	W1219 03:04:37.420925       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:37.420991       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:04:37.426248       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:04:37.426304       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [a8858dc4fe6cf1222bb421499d0aaa734e7a15be6fd8f63ebd4a4f91de0515c1] <==
	E1219 03:25:01.394456       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:25:01.394474       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:25:01.394533       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:25:01.395669       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:26:01.394540       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:26:01.394611       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:26:01.394629       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:26:01.395898       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:26:01.395960       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:26:01.395972       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:28:01.394838       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:28:01.394900       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:28:01.394915       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:28:01.397053       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:28:01.397141       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:28:01.397154       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
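
	The repeating 503 pattern above indicates the aggregated `v1beta1.metrics.k8s.io` APIService has no healthy backend (the metrics-server pod never came up, consistent with the kubelet image-pull failures below). A minimal, self-contained sketch for pulling the failing APIService name and HTTP status out of such a log line with standard shell tools — the sample line is copied from the log above; the variable names are illustrative, not part of any tooling:

```shell
# Sample line copied from the kube-apiserver log above.
line='Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable'

# Extract the quoted APIService name and the HTTP status code.
svc=$(printf '%s\n' "$line" | sed -n 's/.*APIService "\([^"]*\)".*/\1/p')
code=$(printf '%s\n' "$line" | sed -n 's/.*ResponseCode: \([0-9]*\).*/\1/p')

echo "$svc $code"
# -> v1beta1.metrics.k8s.io 503
```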
	
	
	==> kube-controller-manager [19baa8a9717c29181666ffb3efdceea762fabc94b416863c5b53dc5a31a91c9c] <==
	I1219 03:23:05.301071       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:23:35.160283       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:23:35.310168       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:24:05.165973       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:24:05.317730       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:24:35.171212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:24:35.324791       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:25:05.176103       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:25:05.331995       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:25:35.181354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:25:35.339736       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:26:05.186450       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:26:05.347514       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:26:35.190375       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:26:35.355843       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:27:05.194719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:27:05.363871       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:27:35.199771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:27:35.372169       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:28:05.204566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:28:05.379345       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:28:35.209659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:28:35.387115       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:29:05.214951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:29:05.396571       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [ee8c252f3d8f4aee54318214731e5386b3c089ad31c19108f2f01301f0698503] <==
	I1219 03:04:09.518750       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 03:04:09.518772       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:04:09.518714       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:04:09.518844       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:04:09.519026       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 03:04:09.520319       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:04:09.520450       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:04:09.521176       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:04:09.521198       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:04:09.522757       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:04:09.522886       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:04:09.522951       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:04:09.522995       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:04:09.523002       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:04:09.523032       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:04:09.525792       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:04:09.525891       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:04:09.531004       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-103644" podCIDRs=["10.244.0.0/24"]
	I1219 03:04:09.532016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:04:09.534268       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 03:04:09.534391       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 03:04:09.534495       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103644"
	I1219 03:04:09.534569       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1219 03:04:09.544075       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:04:29.536364       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [47a843aefeca97fb22cc246b51d4c45d4468c52e15b42a86d187a0f0219b93c1] <==
	I1219 03:05:01.517369       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:01.589222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:01.690045       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:01.690103       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:05:01.690217       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:01.722003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:01.722073       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:01.730736       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:01.731726       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:01.731879       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:01.739465       1 config.go:200] "Starting service config controller"
	I1219 03:05:01.739484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:01.739503       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:01.739507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:01.739522       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:01.739526       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:01.739641       1 config.go:309] "Starting node config controller"
	I1219 03:05:01.739660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:01.739669       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:01.840105       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:01.840105       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:01.840164       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [872846ec96d2d949d4a7f1a644718c7d0f80e0e28fcde0d7425648a2ffc89358] <==
	I1219 03:04:11.251855       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:11.333860       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:04:11.434180       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:04:11.434222       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:04:11.434338       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:11.457457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:11.457519       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:04:11.463613       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:11.464075       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:04:11.464128       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:11.465604       1 config.go:200] "Starting service config controller"
	I1219 03:04:11.465683       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:11.465703       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:11.465727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:11.465758       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:11.465766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:11.465808       1 config.go:309] "Starting node config controller"
	I1219 03:04:11.465820       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:11.565945       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:11.565947       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:11.565992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:04:11.565947       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c2591b42ec56d4e729eb5050bae8a11599a1d848e2b407c779bd43019d38acd7] <==
	I1219 03:04:59.843962       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:05:01.342682       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:05:01.342721       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:01.349758       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:01.349809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:01.349953       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:01.349977       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:01.350067       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:05:01.350271       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:05:01.350665       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:01.350757       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:01.450325       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:01.450476       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 03:05:01.453128       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [dd57b66fad064bb42b78fb88e0736406dd552174022391793749a16c31b46525] <==
	E1219 03:04:02.553057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:04:02.553110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:04:02.553135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 03:04:02.553343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:04:02.553704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:04:02.553761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:04:02.554155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:04:02.554656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:04:02.554658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:04:02.554761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:04:02.554859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:04:02.555197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:04:03.436712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:04:03.486188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:04:03.545873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:04:03.577212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:04:03.612471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:04:03.655998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:04:03.678451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:04:03.678451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:04:03.684113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:04:03.733392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:04:03.777812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:04:03.848470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1219 03:04:05.447006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:28:10 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:10.494951     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:28:15 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:15.495194     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:28:15 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:15.495194     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:28:16 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:16.495142     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:28:19 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:19.497564     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:28:25 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:25.495529     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:28:28 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:28.494837     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:28:28 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:28.494892     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:28:29 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:29.495596     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:28:34 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:34.495269     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:28:36 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:36.495056     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:28:39 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:39.495175     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:28:42 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:42.495658     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:28:43 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:43.494942     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:28:45 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:45.495303     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:28:50 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:50.495472     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:28:51 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:51.495128     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:28:54 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:54.495756     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:28:57 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:57.495296     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:28:58 default-k8s-diff-port-103644 kubelet[593]: E1219 03:28:58.495503     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	Dec 19 03:29:02 default-k8s-diff-port-103644 kubelet[593]: E1219 03:29:02.495606     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"clear-stale-pid\" with ImagePullBackOff: \"Back-off pulling image \\\"kong:3.9\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/kong:3.9\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/kong/manifests/sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k2snn" podUID="fa13a9b2-1403-45ab-a6ce-a4ca11c18da3"
	Dec 19 03:29:06 default-k8s-diff-port-103644 kubelet[593]: E1219 03:29:06.495212     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-auth\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-auth:1.4.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-auth/manifests/sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" podUID="26a71141-ff39-402c-8a96-699c64278554"
	Dec 19 03:29:06 default-k8s-diff-port-103644 kubelet[593]: E1219 03:29:06.495222     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-tctv8" podUID="37ff7895-b382-407b-9032-56a428173579"
	Dec 19 03:29:08 default-k8s-diff-port-103644 kubelet[593]: E1219 03:29:08.495262     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-api\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-api:1.14.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-api/manifests/sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-api-b9fbd5f9b-dpv56" podUID="7d41d5bd-26f4-4810-b588-5a7f49565a91"
	Dec 19 03:29:11 default-k8s-diff-port-103644 kubelet[593]: E1219 03:29:11.495259     593 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard-metrics-scraper/manifests/sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" podUID="000469ce-a882-41a1-ae64-b4e77c5b0f26"
	
	
	==> kubernetes-dashboard [ac0a0c539d898c7ef6dd6eaa2cea6e791bcd2c0c1e36683cfa823d0028b3751c] <==
	I1219 03:16:04.732516       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:16:04.732599       1 init.go:48] Using in-cluster config
	I1219 03:16:04.732829       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [0909564061f06a20d93762ff30a9ecf6d3d13e45691a15d168e1f4b7fa54779e] <==
	W1219 03:28:49.408195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:51.411538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:51.415820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:53.418941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:53.423136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:55.426156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:55.430554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:57.433465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:57.438430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:59.441021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:28:59.445029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:01.448950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:01.454494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:03.457976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:03.462400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:05.466001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:05.472643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:07.476901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:07.481726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:09.485573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:09.492399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:11.495710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:11.500870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:13.504500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:29:13.511227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b836d490b57969a785b22ae7a7c6bfd0c9e0d003c578aa06dbd2415b8ef44317] <==
	I1219 03:05:01.336234       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:31.340069       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 describe pod metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103644 describe pod metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975: exit status 1 (90.838744ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-tctv8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-api-b9fbd5f9b-dpv56" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-85fbf6f9bb-jzn2l" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-k2snn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-103644 describe pod metrics-server-746fcd58dc-tctv8 kubernetes-dashboard-api-b9fbd5f9b-dpv56 kubernetes-dashboard-auth-85fbf6f9bb-jzn2l kubernetes-dashboard-kong-9849c64bd-k2snn kubernetes-dashboard-metrics-scraper-7685fd8b77-jg975: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (543.25s)
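Nearly every kubelet error in the log above shares one root cause: anonymous pulls from registry-1.docker.io answered with 429 Too Many Requests (the unauthenticated Docker Hub pull rate limit), plus one deliberate failure against the fake.domain registry used by the metrics-server test. As a minimal sketch (a hypothetical helper, not part of the test suite), back-offs can be tallied per image from kubelet journal lines like these:

```python
import re
from collections import Counter

def count_pull_backoffs(lines):
    """Tally ImagePullBackOff occurrences per image from kubelet log lines.

    kubelet wraps the image reference in escaped quotes inside its
    err="..." field, so the pattern tolerates any run of backslashes
    around the quotes.
    """
    pat = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
    return Counter(m.group(1) for line in lines for m in pat.finditer(line))
```

Images that dominate the tally (here the dashboard components and kong:3.9) are candidates for pre-loading into the cluster (e.g. with `minikube cache add <image>`) or for authenticated pulls, which raises the Docker Hub limit.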


Test pass (377/420)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 19.51
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.3/json-events 10.8
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.22
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.35.0-rc.1/json-events 11.81
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.41
30 TestBinaryMirror 0.83
31 TestOffline 59.21
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 123.81
38 TestAddons/serial/Volcano 43.07
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 10.47
44 TestAddons/parallel/Registry 18.33
45 TestAddons/parallel/RegistryCreds 0.68
46 TestAddons/parallel/Ingress 20.11
47 TestAddons/parallel/InspektorGadget 10.76
48 TestAddons/parallel/MetricsServer 6.7
50 TestAddons/parallel/CSI 51.13
51 TestAddons/parallel/Headlamp 21.56
52 TestAddons/parallel/CloudSpanner 5.55
53 TestAddons/parallel/LocalPath 12.16
54 TestAddons/parallel/NvidiaDevicePlugin 5.55
55 TestAddons/parallel/Yakd 10.73
56 TestAddons/parallel/AmdGpuDevicePlugin 5.52
57 TestAddons/StoppedEnableDisable 12.61
58 TestCertOptions 28.69
59 TestCertExpiration 212.64
61 TestForceSystemdFlag 27.07
62 TestForceSystemdEnv 40.11
63 TestDockerEnvContainerd 37.99
67 TestErrorSpam/setup 19.3
68 TestErrorSpam/start 0.67
69 TestErrorSpam/status 0.97
70 TestErrorSpam/pause 1.47
71 TestErrorSpam/unpause 1.54
72 TestErrorSpam/stop 2.14
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 42.48
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 5.96
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
84 TestFunctional/serial/CacheCmd/cache/add_local 2.09
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
89 TestFunctional/serial/CacheCmd/cache/delete 0.14
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 39.17
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.27
95 TestFunctional/serial/LogsFileCmd 1.29
96 TestFunctional/serial/InvalidService 4.76
98 TestFunctional/parallel/ConfigCmd 0.48
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.19
102 TestFunctional/parallel/StatusCmd 1.15
106 TestFunctional/parallel/ServiceCmdConnect 22.81
107 TestFunctional/parallel/AddonsCmd 0.19
108 TestFunctional/parallel/PersistentVolumeClaim 35.57
110 TestFunctional/parallel/SSHCmd 0.62
111 TestFunctional/parallel/CpCmd 1.94
112 TestFunctional/parallel/MySQL 39.08
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 1.85
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
122 TestFunctional/parallel/License 0.5
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
125 TestFunctional/parallel/ProfileCmd/profile_list 0.48
126 TestFunctional/parallel/MountCmd/any-port 7.79
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
128 TestFunctional/parallel/Version/short 0.08
129 TestFunctional/parallel/Version/components 0.64
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
134 TestFunctional/parallel/ImageCommands/ImageBuild 4.28
135 TestFunctional/parallel/ImageCommands/Setup 1.92
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.05
139 TestFunctional/parallel/MountCmd/specific-port 1.94
140 TestFunctional/parallel/ServiceCmd/List 1.75
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
145 TestFunctional/parallel/ServiceCmd/JSONOutput 1.87
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
148 TestFunctional/parallel/ServiceCmd/Format 0.38
150 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
151 TestFunctional/parallel/ServiceCmd/URL 0.39
152 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
154 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.24
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.66
158 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
159 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
163 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 36.69
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.06
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.07
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.48
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.02
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.58
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 41.97
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.32
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.38
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.56
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.52
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.26
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.21
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 9.86
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 27.56
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.61
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.95
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 31.65
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.31
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.91
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.64
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.4
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 8.2
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.07
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.52
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.23
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.26
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.24
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.24
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 3.97
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.97
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.15
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.15
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.22
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.13
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 2.45
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 19.21
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.38
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.55
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.75
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.58
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.46
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.58
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.41
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.41
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.42
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 13.21
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.05
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.45
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.44
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.45
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.94
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 97.45
266 TestMultiControlPlane/serial/DeployApp 6.1
267 TestMultiControlPlane/serial/PingHostFromPods 1.19
268 TestMultiControlPlane/serial/AddWorkerNode 27.1
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
271 TestMultiControlPlane/serial/CopyFile 18.05
272 TestMultiControlPlane/serial/StopSecondaryNode 12.76
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.74
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.05
277 TestMultiControlPlane/serial/DeleteSecondaryNode 9.62
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
279 TestMultiControlPlane/serial/StopCluster 36.23
280 TestMultiControlPlane/serial/RestartCluster 51.26
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
282 TestMultiControlPlane/serial/AddSecondaryNode 39.02
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
288 TestJSONOutput/start/Command 41.35
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Command 0.76
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Command 0.62
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 5.87
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 33.56
314 TestKicCustomNetwork/use_default_bridge_network 22.54
315 TestKicExistingNetwork 23.39
316 TestKicCustomSubnet 23.09
317 TestKicStaticIP 23.65
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 48.49
322 TestMountStart/serial/StartWithMountFirst 4.5
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 7.41
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.68
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 8
330 TestMountStart/serial/VerifyMountPostStop 0.29
333 TestMultiNode/serial/FreshStart2Nodes 68.23
334 TestMultiNode/serial/DeployApp2Nodes 4.71
335 TestMultiNode/serial/PingHostFrom2Pods 0.84
336 TestMultiNode/serial/AddNode 26.19
337 TestMultiNode/serial/MultiNodeLabels 0.07
338 TestMultiNode/serial/ProfileList 0.69
339 TestMultiNode/serial/CopyFile 10.15
340 TestMultiNode/serial/StopNode 2.3
341 TestMultiNode/serial/StartAfterStop 7.01
342 TestMultiNode/serial/RestartKeepsNodes 69.18
343 TestMultiNode/serial/DeleteNode 5.36
344 TestMultiNode/serial/StopMultiNode 24.07
345 TestMultiNode/serial/RestartMultiNode 44.28
346 TestMultiNode/serial/ValidateNameConflict 24.71
351 TestPreload 109.63
353 TestScheduledStopUnix 95.49
356 TestInsufficientStorage 11.65
357 TestRunningBinaryUpgrade 298.38
359 TestKubernetesUpgrade 307.26
360 TestMissingContainerUpgrade 91.19
362 TestStoppedBinaryUpgrade/Setup 3.76
363 TestPause/serial/Start 56.43
364 TestStoppedBinaryUpgrade/Upgrade 327.77
365 TestPause/serial/SecondStartNoReconfiguration 6.3
373 TestPause/serial/Pause 0.79
374 TestPause/serial/VerifyStatus 0.47
375 TestPause/serial/Unpause 0.97
376 TestPause/serial/PauseAgain 0.93
377 TestPause/serial/DeletePaused 4.64
378 TestPause/serial/VerifyDeletedResources 18.87
379 TestStoppedBinaryUpgrade/MinikubeLogs 2.56
381 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
382 TestNoKubernetes/serial/StartWithK8s 26.08
386 TestNoKubernetes/serial/StartWithStopK8s 8.87
391 TestNetworkPlugins/group/false 4.61
395 TestNoKubernetes/serial/Start 4.88
396 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
397 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
399 TestStartStop/group/old-k8s-version/serial/FirstStart 57.08
400 TestNoKubernetes/serial/ProfileList 9.87
401 TestNoKubernetes/serial/Stop 2.1
402 TestNoKubernetes/serial/StartNoArgs 7.39
404 TestStartStop/group/no-preload/serial/FirstStart 52.59
406 TestStartStop/group/embed-certs/serial/FirstStart 45.04
407 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.41
409 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.58
410 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
411 TestStartStop/group/embed-certs/serial/DeployApp 9.27
412 TestStartStop/group/no-preload/serial/DeployApp 9.27
413 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
414 TestStartStop/group/old-k8s-version/serial/Stop 12.08
415 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
416 TestStartStop/group/embed-certs/serial/Stop 12.09
417 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
418 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
419 TestStartStop/group/no-preload/serial/Stop 12.17
420 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
421 TestStartStop/group/old-k8s-version/serial/SecondStart 48.18
422 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
423 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.44
424 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
425 TestStartStop/group/embed-certs/serial/SecondStart 375.51
426 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
427 TestStartStop/group/no-preload/serial/SecondStart 377.47
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
429 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 378.1
438 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
439 TestStartStop/group/old-k8s-version/serial/Pause 2.9
441 TestStartStop/group/newest-cni/serial/FirstStart 23.62
442 TestStartStop/group/newest-cni/serial/DeployApp 0
443 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
444 TestStartStop/group/newest-cni/serial/Stop 1.43
445 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
446 TestStartStop/group/newest-cni/serial/SecondStart 374.09
447 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
448 TestStartStop/group/embed-certs/serial/Pause 3.21
449 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
450 TestStartStop/group/no-preload/serial/Pause 3.26
451 TestNetworkPlugins/group/auto/Start 45.07
452 TestNetworkPlugins/group/calico/Start 49.81
453 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
454 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.33
455 TestNetworkPlugins/group/custom-flannel/Start 54.51
456 TestNetworkPlugins/group/auto/KubeletFlags 0.31
457 TestNetworkPlugins/group/auto/NetCatPod 9.19
458 TestNetworkPlugins/group/auto/DNS 0.16
459 TestNetworkPlugins/group/auto/Localhost 0.13
460 TestNetworkPlugins/group/auto/HairPin 0.13
461 TestNetworkPlugins/group/calico/ControllerPod 6.01
462 TestNetworkPlugins/group/calico/KubeletFlags 0.31
463 TestNetworkPlugins/group/calico/NetCatPod 9.2
464 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
465 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
466 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
467 TestStartStop/group/newest-cni/serial/Pause 3.21
468 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
469 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.31
470 TestNetworkPlugins/group/calico/DNS 0.14
471 TestNetworkPlugins/group/calico/Localhost 0.14
472 TestNetworkPlugins/group/calico/HairPin 0.15
473 TestNetworkPlugins/group/enable-default-cni/Start 70.24
474 TestNetworkPlugins/group/flannel/Start 54.88
475 TestNetworkPlugins/group/custom-flannel/DNS 0.17
476 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
477 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
478 TestNetworkPlugins/group/bridge/Start 39.91
479 TestNetworkPlugins/group/kindnet/Start 43.56
480 TestNetworkPlugins/group/flannel/ControllerPod 6.01
481 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
482 TestNetworkPlugins/group/bridge/NetCatPod 9.17
483 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
484 TestNetworkPlugins/group/flannel/NetCatPod 8.2
485 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
486 TestNetworkPlugins/group/bridge/DNS 0.16
487 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.28
488 TestNetworkPlugins/group/bridge/Localhost 0.18
489 TestNetworkPlugins/group/flannel/DNS 0.14
490 TestNetworkPlugins/group/bridge/HairPin 0.11
491 TestNetworkPlugins/group/flannel/Localhost 0.12
492 TestNetworkPlugins/group/flannel/HairPin 0.12
493 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
494 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
495 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
496 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
497 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
498 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
499 TestNetworkPlugins/group/kindnet/DNS 0.17
500 TestNetworkPlugins/group/kindnet/Localhost 0.14
501 TestNetworkPlugins/group/kindnet/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (19.51s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-039364 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-039364 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (19.514569654s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (19.51s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1219 02:25:42.566212  257493 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1219 02:25:42.566321  257493 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-039364
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-039364: exit status 85 (77.759391ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-039364 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-039364 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:23.110009  257505 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:23.110149  257505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:23.110159  257505 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:23.110163  257505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:23.110438  257505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	W1219 02:25:23.110606  257505 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22230-253859/.minikube/config/config.json: open /home/jenkins/minikube-integration/22230-253859/.minikube/config/config.json: no such file or directory
	I1219 02:25:23.111943  257505 out.go:368] Setting JSON to true
	I1219 02:25:23.113393  257505 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4062,"bootTime":1766107061,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:23.113468  257505 start.go:143] virtualization: kvm guest
	I1219 02:25:23.116955  257505 out.go:99] [download-only-039364] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1219 02:25:23.117162  257505 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball: no such file or directory
	I1219 02:25:23.117233  257505 notify.go:221] Checking for updates...
	I1219 02:25:23.118425  257505 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:23.119656  257505 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:23.120847  257505 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:25:23.121989  257505 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:25:23.123061  257505 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:23.125105  257505 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:23.125433  257505 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:23.150170  257505 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:25:23.150312  257505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:23.346637  257505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-19 02:25:23.336820824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:23.346763  257505 docker.go:319] overlay module found
	I1219 02:25:23.348378  257505 out.go:99] Using the docker driver based on user configuration
	I1219 02:25:23.348429  257505 start.go:309] selected driver: docker
	I1219 02:25:23.348439  257505 start.go:928] validating driver "docker" against <nil>
	I1219 02:25:23.348601  257505 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:23.414353  257505 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-19 02:25:23.402188412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:23.414563  257505 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:23.415152  257505 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1219 02:25:23.415325  257505 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:23.416923  257505 out.go:171] Using Docker driver with root privileges
	I1219 02:25:23.417991  257505 cni.go:84] Creating CNI manager for ""
	I1219 02:25:23.418069  257505 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 02:25:23.418081  257505 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:23.418169  257505 start.go:353] cluster config:
	{Name:download-only-039364 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-039364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:23.419294  257505 out.go:99] Starting "download-only-039364" primary control-plane node in "download-only-039364" cluster
	I1219 02:25:23.419322  257505 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 02:25:23.420345  257505 out.go:99] Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:25:23.420395  257505 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1219 02:25:23.420521  257505 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:25:23.439239  257505 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1219 02:25:23.440119  257505 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1219 02:25:23.440244  257505 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1219 02:25:23.691195  257505 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:23.691238  257505 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:23.692034  257505 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1219 02:25:23.693677  257505 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1219 02:25:23.693708  257505 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1219 02:25:23.804726  257505 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1219 02:25:23.804879  257505 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:35.416491  257505 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1219 02:25:35.416903  257505 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/download-only-039364/config.json ...
	I1219 02:25:35.416944  257505 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/download-only-039364/config.json: {Name:mk2eabff3301e24c54f776369fcc85ca67212f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:35.417749  257505 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1219 02:25:35.418369  257505 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22230-253859/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-039364 host does not exist
	  To start a cluster, run: "minikube start -p download-only-039364"

-- /stdout --
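In the "Last Start" log above, the preload tarball is downloaded with a `?checksum=md5:` suffix on the URL, using a digest obtained from the GCS API (`Got checksum from GCS API "2746dfda401436a5341e0500068bf339"`). A minimal sketch of that kind of integrity check — not minikube's actual code; the function name is illustrative:

```python
import hashlib

def verify_md5(data: bytes, expected_hex: str) -> bool:
    """Return True when the MD5 digest of `data` matches the expected hex digest."""
    return hashlib.md5(data).hexdigest() == expected_hex

# Well-known MD5 test vector for the string "abc".
print(verify_md5(b"abc", "900150983cd24fb0d6963f7d28e17f72"))  # True
```

The `checksum=md5:` query parameter suggests the downloader performs this comparison after fetching `preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4`, so a corrupted or truncated tarball would be rejected rather than cached.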
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-039364
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.34.3/json-events (10.8s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-353965 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-353965 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.79862201s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (10.80s)

TestDownloadOnly/v1.34.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1219 02:25:53.839367  257493 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
I1219 02:25:53.839400  257493 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-353965
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-353965: exit status 85 (76.533057ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-039364 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-039364 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-039364                                                                                                                                                               │ download-only-039364 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-353965 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-353965 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:43.095686  257911 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:43.095834  257911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:43.095847  257911 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:43.095861  257911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:43.096093  257911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:25:43.096612  257911 out.go:368] Setting JSON to true
	I1219 02:25:43.097541  257911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4082,"bootTime":1766107061,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:43.097624  257911 start.go:143] virtualization: kvm guest
	I1219 02:25:43.099619  257911 out.go:99] [download-only-353965] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:43.099765  257911 notify.go:221] Checking for updates...
	I1219 02:25:43.100956  257911 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:43.102156  257911 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:43.103226  257911 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:25:43.104295  257911 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:25:43.105292  257911 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:43.107309  257911 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:43.107681  257911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:43.133722  257911 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:25:43.133869  257911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:43.195301  257911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-19 02:25:43.18585448 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:43.195416  257911 docker.go:319] overlay module found
	I1219 02:25:43.196749  257911 out.go:99] Using the docker driver based on user configuration
	I1219 02:25:43.196777  257911 start.go:309] selected driver: docker
	I1219 02:25:43.196784  257911 start.go:928] validating driver "docker" against <nil>
	I1219 02:25:43.196889  257911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:43.251560  257911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-19 02:25:43.240742377 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:43.251817  257911 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:43.252532  257911 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1219 02:25:43.252736  257911 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:43.254528  257911 out.go:171] Using Docker driver with root privileges
	I1219 02:25:43.255617  257911 cni.go:84] Creating CNI manager for ""
	I1219 02:25:43.255683  257911 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 02:25:43.255693  257911 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:43.255766  257911 start.go:353] cluster config:
	{Name:download-only-353965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-353965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:43.256998  257911 out.go:99] Starting "download-only-353965" primary control-plane node in "download-only-353965" cluster
	I1219 02:25:43.257015  257911 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 02:25:43.258357  257911 out.go:99] Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:25:43.258397  257911 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 02:25:43.258490  257911 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:25:43.276680  257911 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1219 02:25:43.276831  257911 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1219 02:25:43.276861  257911 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory, skipping pull
	I1219 02:25:43.276868  257911 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in cache, skipping pull
	I1219 02:25:43.276878  257911 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 as a tarball
	I1219 02:25:43.359254  257911 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:43.359289  257911 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:43.360093  257911 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 02:25:43.361665  257911 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1219 02:25:43.361687  257911 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1219 02:25:43.470391  257911 preload.go:295] Got checksum from GCS API "8ed8b49ee38344137d62ea681aa755ac"
	I1219 02:25:43.470453  257911 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:8ed8b49ee38344137d62ea681aa755ac -> /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:52.941976  257911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 02:25:52.942360  257911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/download-only-353965/config.json ...
	I1219 02:25:52.942398  257911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/download-only-353965/config.json: {Name:mk18af04a7f2afa0f31c7f4b992ccb4d38b454a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:52.942636  257911 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 02:25:52.942826  257911 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22230-253859/.minikube/cache/linux/amd64/v1.34.3/kubectl
	
	
	* The control-plane node download-only-353965 host does not exist
	  To start a cluster, run: "minikube start -p download-only-353965"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.22s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-353965
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.35.0-rc.1/json-events (11.81s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-091459 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-091459 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.811710699s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (11.81s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1219 02:26:06.156221  257493 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1219 02:26:06.156350  257493 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-091459
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-091459: exit status 85 (76.928ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-039364 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-039364 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-039364                                                                                                                                                                    │ download-only-039364 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-353965 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd      │ download-only-353965 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                      │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-353965                                                                                                                                                                    │ download-only-353965 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-091459 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-091459 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:54.397881  258284 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:54.398181  258284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:54.398191  258284 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:54.398197  258284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:54.398425  258284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:25:54.398978  258284 out.go:368] Setting JSON to true
	I1219 02:25:54.399933  258284 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4093,"bootTime":1766107061,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:54.399992  258284 start.go:143] virtualization: kvm guest
	I1219 02:25:54.444031  258284 out.go:99] [download-only-091459] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:54.444282  258284 notify.go:221] Checking for updates...
	I1219 02:25:54.517312  258284 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:54.590604  258284 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:54.662764  258284 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:25:54.736563  258284 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:25:54.809783  258284 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:54.954865  258284 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:54.955243  258284 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:54.977716  258284 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:25:54.977818  258284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:55.035136  258284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-19 02:25:55.025530294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:55.035242  258284 docker.go:319] overlay module found
	I1219 02:25:55.080884  258284 out.go:99] Using the docker driver based on user configuration
	I1219 02:25:55.080950  258284 start.go:309] selected driver: docker
	I1219 02:25:55.080961  258284 start.go:928] validating driver "docker" against <nil>
	I1219 02:25:55.081095  258284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:55.138868  258284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-19 02:25:55.129711451 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:55.139043  258284 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:55.139551  258284 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1219 02:25:55.139709  258284 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:55.194044  258284 out.go:171] Using Docker driver with root privileges
	I1219 02:25:55.280467  258284 cni.go:84] Creating CNI manager for ""
	I1219 02:25:55.280549  258284 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1219 02:25:55.280561  258284 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:55.280660  258284 start.go:353] cluster config:
	{Name:download-only-091459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-091459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:55.381828  258284 out.go:99] Starting "download-only-091459" primary control-plane node in "download-only-091459" cluster
	I1219 02:25:55.381870  258284 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1219 02:25:55.445364  258284 out.go:99] Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:25:55.445453  258284 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 02:25:55.445501  258284 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:25:55.462602  258284 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1219 02:25:55.462750  258284 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1219 02:25:55.462766  258284 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory, skipping pull
	I1219 02:25:55.462771  258284 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in cache, skipping pull
	I1219 02:25:55.462779  258284 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 as a tarball
	I1219 02:25:55.548905  258284 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:55.548945  258284 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:55.549782  258284 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 02:25:55.591420  258284 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1219 02:25:55.591456  258284 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1219 02:25:55.706923  258284 preload.go:295] Got checksum from GCS API "ffe652c02cd8d6c779ed399620f0c4bd"
	I1219 02:25:55.706971  258284 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:ffe652c02cd8d6c779ed399620f0c4bd -> /home/jenkins/minikube-integration/22230-253859/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-091459 host does not exist
	  To start a cluster, run: "minikube start -p download-only-091459"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-091459
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-481825 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "download-docker-481825" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-481825
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.83s)

=== RUN   TestBinaryMirror
I1219 02:26:07.488257  257493 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-667411 --alsologtostderr --binary-mirror http://127.0.0.1:34169 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-667411" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-667411
--- PASS: TestBinaryMirror (0.83s)

TestOffline (59.21s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-518296 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-518296 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (56.555905622s)
helpers_test.go:176: Cleaning up "offline-containerd-518296" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-518296
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-518296: (2.651985193s)
--- PASS: TestOffline (59.21s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-367973
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-367973: exit status 85 (70.434691ms)

-- stdout --
	* Profile "addons-367973" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-367973"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-367973
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-367973: exit status 85 (70.896179ms)

-- stdout --
	* Profile "addons-367973" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-367973"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (123.81s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-367973 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-367973 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.811924807s)
--- PASS: TestAddons/Setup (123.81s)

TestAddons/serial/Volcano (43.07s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 15.153236ms
addons_test.go:878: volcano-admission stabilized in 15.347888ms
addons_test.go:870: volcano-scheduler stabilized in 15.730459ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-5gsk5" [12957aad-af2a-4c78-b101-7884ab42f24c] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003125754s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-cpp25" [70166f95-a979-4767-9e1a-032232d889b9] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004268022s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-vg777" [d4dd62b2-3c47-44a0-98d4-a2eb47baa8e6] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003807899s
addons_test.go:905: (dbg) Run:  kubectl --context addons-367973 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-367973 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-367973 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [eececa46-58f0-4e6c-8a89-e5254b519368] Pending
helpers_test.go:353: "test-job-nginx-0" [eececa46-58f0-4e6c-8a89-e5254b519368] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [eececa46-58f0-4e6c-8a89-e5254b519368] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003523639s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable volcano --alsologtostderr -v=1: (11.706975325s)
--- PASS: TestAddons/serial/Volcano (43.07s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-367973 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-367973 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (10.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-367973 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-367973 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6da61793-18b2-4a50-80e3-6d4f6ae38e81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6da61793-18b2-4a50-80e3-6d4f6ae38e81] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00328218s
addons_test.go:696: (dbg) Run:  kubectl --context addons-367973 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-367973 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-367973 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.47s)

TestAddons/parallel/Registry (18.33s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.508639ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-9t5h9" [ce170891-3066-4e13-989a-2e53c778f6e1] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00286416s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-2ws6r" [e517b601-082f-4174-89a1-9cd9effcee9a] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003797475s
addons_test.go:394: (dbg) Run:  kubectl --context addons-367973 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-367973 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-367973 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.413285438s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 ip
2025/12/19 02:29:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.33s)

TestAddons/parallel/RegistryCreds (0.68s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.782978ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-367973
addons_test.go:334: (dbg) Run:  kubectl --context addons-367973 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)

TestAddons/parallel/Ingress (20.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-367973 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-367973 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-367973 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [e8677e7e-5a7a-492e-be07-6a29ed315a94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [e8677e7e-5a7a-492e-be07-6a29ed315a94] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004079044s
I1219 02:29:25.956960  257493 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-367973 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable ingress-dns --alsologtostderr -v=1: (1.056815947s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable ingress --alsologtostderr -v=1: (7.75059487s)
--- PASS: TestAddons/parallel/Ingress (20.11s)

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-crzvg" [2aed7eb4-d94b-48ad-bdf4-5e3e93abbaed] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003125081s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable inspektor-gadget --alsologtostderr -v=1: (5.756126733s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

TestAddons/parallel/MetricsServer (6.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.051794ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-btjx8" [2e99caef-e0dc-4bf2-92ae-13609a9c536d] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003033843s
addons_test.go:465: (dbg) Run:  kubectl --context addons-367973 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.70s)

TestAddons/parallel/CSI (51.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1219 02:29:38.381019  257493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1219 02:29:38.384769  257493 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1219 02:29:38.384795  257493 kapi.go:107] duration metric: took 3.785228ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.795874ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-367973 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-367973 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [50f8d507-290d-4a09-a4bc-f103229a3ab3] Pending
helpers_test.go:353: "task-pv-pod" [50f8d507-290d-4a09-a4bc-f103229a3ab3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [50f8d507-290d-4a09-a4bc-f103229a3ab3] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003737399s
addons_test.go:574: (dbg) Run:  kubectl --context addons-367973 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-367973 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-367973 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-367973 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-367973 delete pod task-pv-pod: (1.133807865s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-367973 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-367973 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-367973 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [a823d3c9-c8c6-4a1f-9506-86e1cd3f6afe] Pending
helpers_test.go:353: "task-pv-pod-restore" [a823d3c9-c8c6-4a1f-9506-86e1cd3f6afe] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003601622s
addons_test.go:616: (dbg) Run:  kubectl --context addons-367973 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-367973 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-367973 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.555664006s)
--- PASS: TestAddons/parallel/CSI (51.13s)

TestAddons/parallel/Headlamp (21.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-367973 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-2cjbk" [88c983e8-c691-4deb-8c3c-db239f132b9d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-2cjbk" [88c983e8-c691-4deb-8c3c-db239f132b9d] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003913113s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable headlamp --alsologtostderr -v=1: (5.704701327s)
--- PASS: TestAddons/parallel/Headlamp (21.56s)

TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-pq56d" [ad3b9838-c62e-481a-be18-1abd2220454c] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003541857s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/parallel/LocalPath (12.16s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-367973 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-367973 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [be9246ef-f1ad-400e-9e76-b5791ed36364] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [be9246ef-f1ad-400e-9e76-b5791ed36364] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [be9246ef-f1ad-400e-9e76-b5791ed36364] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003280792s
addons_test.go:969: (dbg) Run:  kubectl --context addons-367973 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 ssh "cat /opt/local-path-provisioner/pvc-e7e4f1d0-1c7c-4ac7-975c-176ed1f10f0d_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-367973 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-367973 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.16s)

TestAddons/parallel/NvidiaDevicePlugin (5.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-skdd7" [36c5b64a-38e1-4712-87c9-310eeb92d22a] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003678569s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

TestAddons/parallel/Yakd (10.73s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-b4vx7" [b58ff374-a9d3-4269-8463-9aad721a3dd6] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003609099s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-367973 addons disable yakd --alsologtostderr -v=1: (5.728892724s)
--- PASS: TestAddons/parallel/Yakd (10.73s)

TestAddons/parallel/AmdGpuDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-4xt98" [25f259f9-0ba4-4fbb-8b61-3f2bdee5afa7] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003742231s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-367973 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.52s)

TestAddons/StoppedEnableDisable (12.61s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-367973
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-367973: (12.290237659s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-367973
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-367973
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-367973
--- PASS: TestAddons/StoppedEnableDisable (12.61s)

TestCertOptions (28.69s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-967008 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-967008 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (25.333674968s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-967008 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-967008 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-967008 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-967008" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-967008
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-967008: (2.461829377s)
--- PASS: TestCertOptions (28.69s)

TestCertExpiration (212.64s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-253327 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-253327 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (23.718257234s)
E1219 03:01:19.191710  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-253327 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-253327 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.1638816s)
helpers_test.go:176: Cleaning up "cert-expiration-253327" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-253327
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-253327: (2.753294988s)
--- PASS: TestCertExpiration (212.64s)

TestForceSystemdFlag (27.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-887253 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-887253 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (23.879670611s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-887253 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-887253" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-887253
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-887253: (2.849610626s)
--- PASS: TestForceSystemdFlag (27.07s)

TestForceSystemdEnv (40.11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-549148 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-549148 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.257915092s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-549148 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-549148" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-549148
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-549148: (2.342921029s)
--- PASS: TestForceSystemdEnv (40.11s)

TestDockerEnvContainerd (37.99s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-614338 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-614338 --driver=docker  --container-runtime=containerd: (22.083760405s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-614338"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXTsE1TP/agent.282050" SSH_AGENT_PID="282051" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXTsE1TP/agent.282050" SSH_AGENT_PID="282051" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXTsE1TP/agent.282050" SSH_AGENT_PID="282051" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.000886247s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXTsE1TP/agent.282050" SSH_AGENT_PID="282051" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-614338" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-614338
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-614338: (2.356167363s)
--- PASS: TestDockerEnvContainerd (37.99s)

TestErrorSpam/setup (19.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-455051 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-455051 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-455051 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-455051 --driver=docker  --container-runtime=containerd: (19.300749561s)
--- PASS: TestErrorSpam/setup (19.30s)

TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.97s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 status
--- PASS: TestErrorSpam/status (0.97s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (2.14s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 stop: (1.931590432s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-455051 --log_dir /tmp/nospam-455051 stop
--- PASS: TestErrorSpam/stop (2.14s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/test/nested/copy/257493/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180941 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-180941 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (42.480284322s)
--- PASS: TestFunctional/serial/StartWithProxy (42.48s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.96s)

=== RUN   TestFunctional/serial/SoftStart
I1219 02:32:39.261545  257493 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180941 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-180941 --alsologtostderr -v=8: (5.962282988s)
functional_test.go:678: soft start took 5.963063175s for "functional-180941" cluster.
I1219 02:32:45.224279  257493 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (5.96s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-180941 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-180941 /tmp/TestFunctionalserialCacheCmdcacheadd_local3684130980/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cache add minikube-local-cache-test:functional-180941
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 cache add minikube-local-cache-test:functional-180941: (1.731026099s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cache delete minikube-local-cache-test:functional-180941
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-180941
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.723786ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 kubectl -- --context functional-180941 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-180941 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (39.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180941 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:33:12.815760  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:12.821281  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:12.831596  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:12.851907  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:12.892257  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:12.972803  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:13.133259  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:13.453773  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:14.094716  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:15.375228  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:17.935969  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:33:23.056155  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-180941 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.169412887s)
functional_test.go:776: restart took 39.16954312s for "functional-180941" cluster.
I1219 02:33:31.622292  257493 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (39.17s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-180941 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 logs: (1.266488957s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 logs --file /tmp/TestFunctionalserialLogsFileCmd247725948/001/logs.txt
E1219 02:33:33.296534  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 logs --file /tmp/TestFunctionalserialLogsFileCmd247725948/001/logs.txt: (1.284928316s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-180941 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-180941
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-180941: exit status 115 (377.513693ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30797 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-180941 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-180941 delete -f testdata/invalidsvc.yaml: (1.167972964s)
--- PASS: TestFunctional/serial/InvalidService (4.76s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 config get cpus: exit status 14 (89.549051ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 config get cpus: exit status 14 (86.831695ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180941 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180941 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (201.189797ms)

-- stdout --
	* [functional-180941] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1219 02:33:40.991006  299001 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:33:40.991357  299001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:33:40.991366  299001 out.go:374] Setting ErrFile to fd 2...
	I1219 02:33:40.991372  299001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:33:40.992090  299001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:33:40.992749  299001 out.go:368] Setting JSON to false
	I1219 02:33:40.994252  299001 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4560,"bootTime":1766107061,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:33:40.994327  299001 start.go:143] virtualization: kvm guest
	I1219 02:33:40.996290  299001 out.go:179] * [functional-180941] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:33:40.998190  299001 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:33:40.998205  299001 notify.go:221] Checking for updates...
	I1219 02:33:41.000420  299001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:33:41.002382  299001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:33:41.003575  299001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:33:41.004667  299001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:33:41.005847  299001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:33:41.007314  299001 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:33:41.007956  299001 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:33:41.035018  299001 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:33:41.035121  299001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:33:41.106480  299001 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.093714131 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:33:41.106639  299001 docker.go:319] overlay module found
	I1219 02:33:41.108189  299001 out.go:179] * Using the docker driver based on existing profile
	I1219 02:33:41.109869  299001 start.go:309] selected driver: docker
	I1219 02:33:41.109889  299001 start.go:928] validating driver "docker" against &{Name:functional-180941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-180941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:33:41.110014  299001 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:33:41.111732  299001 out.go:203] 
	W1219 02:33:41.115401  299001 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:33:41.116854  299001 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180941 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180941 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180941 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.759295ms)

-- stdout --
	* [functional-180941] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1219 02:33:41.013485  299018 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:33:41.013597  299018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:33:41.013604  299018 out.go:374] Setting ErrFile to fd 2...
	I1219 02:33:41.013611  299018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:33:41.013947  299018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:33:41.014422  299018 out.go:368] Setting JSON to false
	I1219 02:33:41.015552  299018 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4560,"bootTime":1766107061,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:33:41.015634  299018 start.go:143] virtualization: kvm guest
	I1219 02:33:41.017073  299018 out.go:179] * [functional-180941] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:33:41.018481  299018 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:33:41.018534  299018 notify.go:221] Checking for updates...
	I1219 02:33:41.020556  299018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:33:41.022146  299018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:33:41.023123  299018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:33:41.024168  299018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:33:41.025330  299018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:33:41.027192  299018 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:33:41.028050  299018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:33:41.057150  299018 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:33:41.057291  299018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:33:41.122840  299018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.112261861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:33:41.122985  299018 docker.go:319] overlay module found
	I1219 02:33:41.124316  299018 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1219 02:33:41.125451  299018 start.go:309] selected driver: docker
	I1219 02:33:41.125470  299018 start.go:928] validating driver "docker" against &{Name:functional-180941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-180941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:33:41.125626  299018 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:33:41.127420  299018 out.go:203] 
	W1219 02:33:41.128381  299018 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:33:41.129351  299018 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)

TestFunctional/parallel/ServiceCmdConnect (22.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-180941 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-180941 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-cmmxr" [f14500b1-7be9-4e64-ba7b-b824b9faef10] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-cmmxr" [f14500b1-7be9-4e64-ba7b-b824b9faef10] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.004216557s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31414
functional_test.go:1680: http://192.168.49.2:31414: success! body:
Request served by hello-node-connect-7d85dfc575-cmmxr

HTTP/1.1 GET /

Host: 192.168.49.2:31414
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.81s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (35.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e867e906-a63a-422b-beae-6e1d81b5654a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003773616s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-180941 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-180941 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-180941 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-180941 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:33:56.331893  257493 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [744af00f-bfb1-4b7c-9d83-6b338d0a6b2e] Pending
helpers_test.go:353: "sp-pod" [744af00f-bfb1-4b7c-9d83-6b338d0a6b2e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [744af00f-bfb1-4b7c-9d83-6b338d0a6b2e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003828045s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-180941 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-180941 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-180941 delete -f testdata/storage-provisioner/pod.yaml: (1.697242564s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-180941 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:34:20.321846  257493 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [29413d33-c36e-4f09-9fb4-98735c2b1651] Pending
helpers_test.go:353: "sp-pod" [29413d33-c36e-4f09-9fb4-98735c2b1651] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003930592s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-180941 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.57s)

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (1.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh -n functional-180941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cp functional-180941:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4160514187/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh -n functional-180941 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh -n functional-180941 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.94s)

TestFunctional/parallel/MySQL (39.08s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-180941 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-vsb7g" [5db7995e-1f05-4321-8780-6ec942c775cc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-vsb7g" [5db7995e-1f05-4321-8780-6ec942c775cc] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.004137353s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;": exit status 1 (117.250528ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1219 02:34:30.285009  257493 retry.go:31] will retry after 614.634413ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;": exit status 1 (171.165616ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1219 02:34:31.072034  257493 retry.go:31] will retry after 1.844720565s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;": exit status 1 (163.421894ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1219 02:34:33.081512  257493 retry.go:31] will retry after 2.844046621s: exit status 1
E1219 02:34:34.738036  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180941 exec mysql-6bcdcbc558-vsb7g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.08s)
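The "will retry after …" lines above come from minikube's retry helper, which polls the `kubectl exec … mysql` probe until the MySQL pod finishes initializing and stops rejecting logins. As a rough illustration only (a generic poll-until-success sketch, not minikube's actual retry.go; `flaky` is a hypothetical stand-in for the kubectl probe):

```shell
#!/bin/sh
# Generic poll-until-success sketch: re-run a command until it exits 0,
# sleeping between attempts, up to a maximum number of tries.
retry() {
  max=$1; delay=$2; shift 2
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed, retrying in ${delay}s" >&2
    sleep "$delay"
    n=$((n + 1))
  done
}

# Stand-in probe that fails twice before succeeding, like a MySQL pod
# that refuses connections until initialization completes.
i=0
flaky() { i=$((i + 1)); [ "$i" -ge 3 ]; }

retry 5 0 flaky && echo "probe succeeded on attempt $i"
```

The real helper uses randomized backoff delays (614ms, 1.8s, 2.8s above) rather than a fixed interval.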

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/257493/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /etc/test/nested/copy/257493/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.85s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/257493.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /etc/ssl/certs/257493.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/257493.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /usr/share/ca-certificates/257493.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2574932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /etc/ssl/certs/2574932.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2574932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /usr/share/ca-certificates/2574932.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-180941 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh "sudo systemctl is-active docker": exit status 1 (293.548462ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh "sudo systemctl is-active crio": exit status 1 (298.641634ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
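The test above passes because `systemctl is-active` prints the unit state and exits non-zero for any state other than active; systemd uses exit status 3 for inactive units, which is what the "ssh: Process exited with status 3" lines show propagating through the ssh wrapper. A minimal sketch of that exit-code contract, using a stubbed `is_active` function with hypothetical unit states in place of a real systemd host:

```shell
#!/bin/sh
# Stubbed stand-in for `systemctl is-active <unit>` (states are assumptions
# for illustration): prints the state, exits 0 only when the unit is active,
# and mimics systemd's exit status 3 for inactive units.
is_active() {
  case $1 in
    containerd) echo "active";   return 0 ;;
    *)          echo "inactive"; return 3 ;;
  esac
}

for unit in docker crio containerd; do
  state=$(is_active "$unit")
  rc=$?
  echo "$unit: $state (exit $rc)"
done
```

On a containerd cluster, only `containerd` reports active; the test asserts the other runtimes are disabled by checking for this non-zero exit.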

                                                
                                    
TestFunctional/parallel/License (0.5s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.50s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-180941 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-180941 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-4z4zz" [513d90b1-bb58-4604-96fc-4a7ff1689c05] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-4z4zz" [513d90b1-bb58-4604-96fc-4a7ff1689c05] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003984858s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "408.393045ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "76.357881ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdany-port240185404/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766111620159455319" to /tmp/TestFunctionalparallelMountCmdany-port240185404/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766111620159455319" to /tmp/TestFunctionalparallelMountCmdany-port240185404/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766111620159455319" to /tmp/TestFunctionalparallelMountCmdany-port240185404/001/test-1766111620159455319
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.010343ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1219 02:33:40.500891  257493 retry.go:31] will retry after 263.355189ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:33 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:33 test-1766111620159455319
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh cat /mount-9p/test-1766111620159455319
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-180941 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [e428a017-930f-45d6-91a4-41044c3cd159] Pending
helpers_test.go:353: "busybox-mount" [e428a017-930f-45d6-91a4-41044c3cd159] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [e428a017-930f-45d6-91a4-41044c3cd159] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [e428a017-930f-45d6-91a4-41044c3cd159] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003569255s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-180941 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdany-port240185404/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
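The mount test above retries `findmnt -T /mount-9p | grep 9p` until the 9p filesystem shows up in the guest's mount table (the first attempt fails because the mount is still propagating). A minimal stand-in for that check, run here against a hardcoded sample table (an assumption for illustration) rather than a live mount:

```shell
#!/bin/sh
# Check a findmnt-style table for a 9p filesystem on the expected target.
# The table below is hardcoded sample output; on a live node this would be
# the output of `findmnt -T /mount-9p` inside the minikube guest.
table='TARGET    SOURCE                 FSTYPE OPTIONS
/mount-9p 192.168.49.1:/mount-9p 9p     rw,sync,dirsync'

if printf '%s\n' "$table" | grep -q '^/mount-9p .* 9p '; then
  echo "9p mount present"
else
  echo "no 9p mount yet" >&2
  exit 1
fi
```

Anchoring the pattern on both the target path and the `9p` FSTYPE column avoids false positives from unrelated mounts whose source path merely contains the string "9p".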

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "381.983127ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "81.812941ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180941 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-180941
docker.io/library/kong:3.9
docker.io/kubernetesui/dashboard-web:1.7.0
docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
docker.io/kubernetesui/dashboard-auth:1.4.0
docker.io/kubernetesui/dashboard-api:1.14.0
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-180941
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180941 image ls --format short --alsologtostderr:
I1219 02:34:17.148677  308227 out.go:360] Setting OutFile to fd 1 ...
I1219 02:34:17.148829  308227 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:17.148840  308227 out.go:374] Setting ErrFile to fd 2...
I1219 02:34:17.148846  308227 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:17.149183  308227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:34:17.150010  308227 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:17.150171  308227 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:17.150692  308227 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:34:17.171842  308227 ssh_runner.go:195] Run: systemctl --version
I1219 02:34:17.171906  308227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:34:17.192794  308227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:34:17.305203  308227 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180941 image ls --format table --alsologtostderr:
┌──────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                      IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├──────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kubernetesui/dashboard-api             │ 1.14.0                                │ sha256:a0607a │ 16.5MB │
│ registry.k8s.io/kube-apiserver                   │ v1.34.3                               │ sha256:aa2709 │ 27.1MB │
│ registry.k8s.io/pause                            │ 3.3                                   │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                       │ v20250512-df8de77b                    │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/kube-controller-manager          │ v1.34.3                               │ sha256:5826b2 │ 22.8MB │
│ registry.k8s.io/kube-proxy                       │ v1.34.3                               │ sha256:36eef8 │ 26MB   │
│ registry.k8s.io/pause                            │ 3.10.1                                │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                            │ latest                                │ sha256:350b16 │ 72.3kB │
│ gcr.io/k8s-minikube/busybox                      │ 1.28.4-glibc                          │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner          │ v5                                    │ sha256:6e38f4 │ 9.06MB │
│ public.ecr.aws/nginx/nginx                       │ alpine                                │ sha256:04da2b │ 23MB   │
│ registry.k8s.io/coredns/coredns                  │ v1.12.1                               │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                             │ 3.6.5-0                               │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/pause                            │ 3.1                                   │ sha256:da86e6 │ 315kB  │
│ docker.io/kicbase/echo-server                    │ functional-180941                     │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server                    │ latest                                │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                       │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:4921d7 │ 42.7MB │
│ docker.io/kubernetesui/dashboard-auth            │ 1.4.0                                 │ sha256:dd5437 │ 14.5MB │
│ docker.io/kubernetesui/dashboard-metrics-scraper │ 1.2.2                                 │ sha256:d9cbc9 │ 13MB   │
│ docker.io/kubernetesui/dashboard-web             │ 1.7.0                                 │ sha256:59f642 │ 62.5MB │
│ docker.io/library/kong                           │ 3.9                                   │ sha256:3a9759 │ 120MB  │
│ docker.io/library/minikube-local-cache-test      │ functional-180941                     │ sha256:d0d30d │ 992B   │
│ registry.k8s.io/kube-scheduler                   │ v1.34.3                               │ sha256:aec12d │ 17.4MB │
└──────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180941 image ls --format table --alsologtostderr:
I1219 02:34:18.213336  308655 out.go:360] Setting OutFile to fd 1 ...
I1219 02:34:18.213449  308655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:18.213457  308655 out.go:374] Setting ErrFile to fd 2...
I1219 02:34:18.213463  308655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:18.213786  308655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:34:18.214608  308655 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:18.214761  308655 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:18.215342  308655 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:34:18.238398  308655 ssh_runner.go:195] Run: systemctl --version
I1219 02:34:18.238469  308655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:34:18.260728  308655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:34:18.369028  308655 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180941 image ls --format json --alsologtostderr:
[{"id":"sha256:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480","repoDigests":["docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29"],"repoTags":["docker.io/library/kong:3.9"],"size":"120420500"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],
"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"22819474"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06","repoDigests":["docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d"],"repoTags":["docker.io/kubernetesui/dashboard-web:1.7.0"],"size":"62497108"},{"id":"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["r
egistry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"17382979"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1","repoDigests":["docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff"],"repoTags":["docker.io/kubernetesui/dashboard-auth:1.4.0"],"size":"14450164"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b","repoDigests":
["docker.io/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff"],"repoTags":["docker.io/kubernetesui/dashboard-api:1.14.0"],"size":"16498766"},{"id":"sha256:d0d30d0680597f758ef8a8a6f5f6d68deb68d33c25d4d1075208813f1bdb60a0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-180941"],"size":"992"},{"id":"sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22996569"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver
@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"27064672"},{"id":"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"25964312"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-180941","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:4921d7a6dffa922dd679732
ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"42673934"},{"id":"sha256:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167","repoDigests":["docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775"],"repoTags":["docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"],"size":"12969394"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180941 image ls --format json --alsologtostderr:
I1219 02:34:17.918419  308556 out.go:360] Setting OutFile to fd 1 ...
I1219 02:34:17.918560  308556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:17.918573  308556 out.go:374] Setting ErrFile to fd 2...
I1219 02:34:17.918596  308556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:17.918882  308556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:34:17.919647  308556 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:17.919794  308556 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:17.920386  308556 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:34:17.941108  308556 ssh_runner.go:195] Run: systemctl --version
I1219 02:34:17.941172  308556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:34:17.964739  308556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:34:18.076971  308556 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180941 image ls --format yaml --alsologtostderr:
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1
repoDigests:
- docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff
repoTags:
- docker.io/kubernetesui/dashboard-auth:1.4.0
size: "14450164"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "27064672"
- id: sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "25964312"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-180941
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480
repoDigests:
- docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29
repoTags:
- docker.io/library/kong:3.9
size: "120420500"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "42673934"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22996569"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "22819474"
- id: sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "17382979"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b
repoDigests:
- docker.io/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff
repoTags:
- docker.io/kubernetesui/dashboard-api:1.14.0
size: "16498766"
- id: sha256:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167
repoDigests:
- docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775
repoTags:
- docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
size: "12969394"
- id: sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06
repoDigests:
- docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d
repoTags:
- docker.io/kubernetesui/dashboard-web:1.7.0
size: "62497108"
- id: sha256:d0d30d0680597f758ef8a8a6f5f6d68deb68d33c25d4d1075208813f1bdb60a0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-180941
size: "992"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180941 image ls --format yaml --alsologtostderr:
I1219 02:34:17.425246  308347 out.go:360] Setting OutFile to fd 1 ...
I1219 02:34:17.425523  308347 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:17.425533  308347 out.go:374] Setting ErrFile to fd 2...
I1219 02:34:17.425537  308347 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:17.425726  308347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:34:17.426320  308347 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:17.426457  308347 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:17.427038  308347 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:34:17.449060  308347 ssh_runner.go:195] Run: systemctl --version
I1219 02:34:17.449132  308347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:34:17.470516  308347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:34:17.579819  308347 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh pgrep buildkitd: exit status 1 (333.488161ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image build -t localhost/my-image:functional-180941 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 image build -t localhost/my-image:functional-180941 testdata/build --alsologtostderr: (3.690551077s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180941 image build -t localhost/my-image:functional-180941 testdata/build --alsologtostderr:
I1219 02:34:18.026048  308588 out.go:360] Setting OutFile to fd 1 ...
I1219 02:34:18.026414  308588 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:18.026428  308588 out.go:374] Setting ErrFile to fd 2...
I1219 02:34:18.026436  308588 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:34:18.026730  308588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:34:18.027473  308588 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:18.028324  308588 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:34:18.029123  308588 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:34:18.051045  308588 ssh_runner.go:195] Run: systemctl --version
I1219 02:34:18.051098  308588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:34:18.073571  308588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:34:18.183495  308588 build_images.go:162] Building image from path: /tmp/build.4238472469.tar
I1219 02:34:18.183563  308588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:34:18.194241  308588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4238472469.tar
I1219 02:34:18.198718  308588 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4238472469.tar: stat -c "%s %y" /var/lib/minikube/build/build.4238472469.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4238472469.tar': No such file or directory
I1219 02:34:18.198748  308588 ssh_runner.go:362] scp /tmp/build.4238472469.tar --> /var/lib/minikube/build/build.4238472469.tar (3072 bytes)
I1219 02:34:18.223449  308588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4238472469
I1219 02:34:18.234648  308588 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4238472469 -xf /var/lib/minikube/build/build.4238472469.tar
I1219 02:34:18.245026  308588 containerd.go:394] Building image: /var/lib/minikube/build/build.4238472469
I1219 02:34:18.245107  308588 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4238472469 --local dockerfile=/var/lib/minikube/build/build.4238472469 --output type=image,name=localhost/my-image:functional-180941
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:9e93a141e3c94d4c97accaadf1a0218fc4811d1e1f5ca60c9a30773d1580c37a done
#8 exporting config sha256:179cb6367364ffd735b6315494364d1904c2976cbd4121510c0324d018e957a4
#8 exporting config sha256:179cb6367364ffd735b6315494364d1904c2976cbd4121510c0324d018e957a4 done
#8 naming to localhost/my-image:functional-180941 done
#8 DONE 0.1s
I1219 02:34:21.613862  308588 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4238472469 --local dockerfile=/var/lib/minikube/build/build.4238472469 --output type=image,name=localhost/my-image:functional-180941: (3.368672055s)
I1219 02:34:21.613959  308588 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4238472469
I1219 02:34:21.623989  308588 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4238472469.tar
I1219 02:34:21.633528  308588 build_images.go:218] Built localhost/my-image:functional-180941 from /tmp/build.4238472469.tar
I1219 02:34:21.633564  308588 build_images.go:134] succeeded building to: functional-180941
I1219 02:34:21.633571  308588 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)

TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.898161619s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-180941
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image load --daemon kicbase/echo-server:functional-180941 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image load --daemon kicbase/echo-server:functional-180941 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-180941
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image load --daemon kicbase/echo-server:functional-180941 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdspecific-port2709256228/001:/mount-9p --alsologtostderr -v=1 --port 36843]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.637077ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1219 02:33:48.261591  257493 retry.go:31] will retry after 538.965002ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdspecific-port2709256228/001:/mount-9p --alsologtostderr -v=1 --port 36843] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh "sudo umount -f /mount-9p": exit status 1 (292.425799ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-180941 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdspecific-port2709256228/001:/mount-9p --alsologtostderr -v=1 --port 36843] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/ServiceCmd/List (1.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 service list: (1.753752631s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.75s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image save kicbase/echo-server:functional-180941 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image rm kicbase/echo-server:functional-180941 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T" /mount1: exit status 1 (436.867324ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1219 02:33:50.332599  257493 retry.go:31] will retry after 528.941646ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-180941 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 service list -o json: (1.874021843s)
functional_test.go:1504: Took "1.874136317s" to run "out/minikube-linux-amd64 -p functional-180941 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.87s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-180941
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 image save --daemon kicbase/echo-server:functional-180941 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-180941
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30769
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-180941 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-180941 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-180941 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-180941 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 304008: os: process already finished
helpers_test.go:520: unable to terminate pid 303714: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30769
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-180941 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-180941 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [e77ecfa4-b7ec-4449-9875-5ff766083fa4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [e77ecfa4-b7ec-4449-9875-5ff766083fa4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.003324965s
I1219 02:34:16.179051  257493 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.24s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.66s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180941 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.66s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-180941 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.116.109 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-180941 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-180941
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-180941
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-180941
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-253859/.minikube/files/etc/test/nested/copy/257493/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (36.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-453239 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-453239 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (36.691024138s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (36.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1219 02:35:16.620032  257493 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-453239 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-453239 --alsologtostderr -v=8: (6.058071454s)
functional_test.go:678: soft start took 6.058437465s for "functional-453239" cluster.
I1219 02:35:22.678496  257493 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-453239 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2981862348/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cache add minikube-local-cache-test:functional-453239
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-453239 cache add minikube-local-cache-test:functional-453239: (1.733890407s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cache delete minikube-local-cache-test:functional-453239
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-453239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.660133ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 kubectl -- --context functional-453239 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-453239 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (41.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-453239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:35:56.660814  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-453239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.966133518s)
functional_test.go:776: restart took 41.966359263s for "functional-453239" cluster.
I1219 02:36:11.656222  257493 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (41.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-453239 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-453239 logs: (1.318716258s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1840802982/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-453239 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi1840802982/001/logs.txt: (1.376099052s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-453239 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-453239
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-453239: exit status 115 (372.902138ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31198 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-453239 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-453239 delete -f testdata/invalidsvc.yaml: (1.004671597s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 config get cpus: exit status 14 (95.014632ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 config get cpus: exit status 14 (88.765592ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-453239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-453239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (198.405311ms)

-- stdout --
	* [functional-453239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1219 02:36:44.545668  326368 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:36:44.545995  326368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:36:44.546008  326368 out.go:374] Setting ErrFile to fd 2...
	I1219 02:36:44.546014  326368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:36:44.546268  326368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:36:44.547085  326368 out.go:368] Setting JSON to false
	I1219 02:36:44.548171  326368 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4744,"bootTime":1766107061,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:36:44.548231  326368 start.go:143] virtualization: kvm guest
	I1219 02:36:44.550285  326368 out.go:179] * [functional-453239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:36:44.551530  326368 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:36:44.551550  326368 notify.go:221] Checking for updates...
	I1219 02:36:44.553689  326368 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:36:44.554699  326368 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:36:44.555799  326368 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:36:44.556838  326368 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:36:44.561208  326368 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:36:44.563394  326368 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 02:36:44.564134  326368 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:36:44.594709  326368 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:36:44.594890  326368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:36:44.662518  326368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:36:44.651008154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:36:44.662682  326368 docker.go:319] overlay module found
	I1219 02:36:44.664368  326368 out.go:179] * Using the docker driver based on existing profile
	I1219 02:36:44.665395  326368 start.go:309] selected driver: docker
	I1219 02:36:44.665411  326368 start.go:928] validating driver "docker" against &{Name:functional-453239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-453239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:36:44.665513  326368 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:36:44.667051  326368 out.go:203] 
	W1219 02:36:44.668123  326368 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:36:44.669460  326368 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-453239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-453239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-453239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (259.517079ms)

-- stdout --
	* [functional-453239] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1219 02:36:45.045876  326739 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:36:45.046038  326739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:36:45.046045  326739 out.go:374] Setting ErrFile to fd 2...
	I1219 02:36:45.046052  326739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:36:45.046567  326739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:36:45.050712  326739 out.go:368] Setting JSON to false
	I1219 02:36:45.052268  326739 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4744,"bootTime":1766107061,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:36:45.052408  326739 start.go:143] virtualization: kvm guest
	I1219 02:36:45.055265  326739 out.go:179] * [functional-453239] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:36:45.056488  326739 notify.go:221] Checking for updates...
	I1219 02:36:45.056508  326739 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:36:45.057792  326739 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:36:45.060993  326739 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 02:36:45.062484  326739 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 02:36:45.067870  326739 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:36:45.069174  326739 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:36:45.070960  326739 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 02:36:45.072730  326739 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:36:45.105543  326739 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:36:45.105663  326739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:36:45.182027  326739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:36:45.169952659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:36:45.182207  326739 docker.go:319] overlay module found
	I1219 02:36:45.184459  326739 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1219 02:36:45.185993  326739 start.go:309] selected driver: docker
	I1219 02:36:45.186016  326739 start.go:928] validating driver "docker" against &{Name:functional-453239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-453239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:36:45.186160  326739 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:36:45.188108  326739 out.go:203] 
	W1219 02:36:45.189052  326739 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:36:45.189961  326739 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-453239 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-453239 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-gz2l2" [f952be02-a90f-4fe5-8392-9aa37c100c7f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-gz2l2" [f952be02-a90f-4fe5-8392-9aa37c100c7f] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003994361s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30159
functional_test.go:1680: http://192.168.49.2:30159: success! body:
Request served by hello-node-connect-9f67c86d4-gz2l2

HTTP/1.1 GET /

Host: 192.168.49.2:30159
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (27.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [75ea72eb-29f0-4764-96d8-fd34ef44b6b7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00473191s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-453239 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-453239 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-453239 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-453239 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [a89ecdcb-4a3b-4478-bd5c-93a231f2853d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [a89ecdcb-4a3b-4478-bd5c-93a231f2853d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005214645s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-453239 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-453239 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-453239 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:36:47.561133  257493 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d85af8e2-426b-4304-b59a-4396fbe9bcfd] Pending
helpers_test.go:353: "sp-pod" [d85af8e2-426b-4304-b59a-4396fbe9bcfd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [d85af8e2-426b-4304-b59a-4396fbe9bcfd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003406511s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-453239 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (27.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh -n functional-453239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cp functional-453239:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3458805856/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh -n functional-453239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh -n functional-453239 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (31.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-453239 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-5zm5q" [23b78f86-cf1f-477f-a259-197efe633ac6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-5zm5q" [23b78f86-cf1f-477f-a259-197efe633ac6] Running
I1219 02:36:33.503263  257493 detect.go:223] nested VM detected
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 18.00430836s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;": exit status 1 (123.171313ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1219 02:36:39.585569  257493 retry.go:31] will retry after 573.852706ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;": exit status 1 (130.868092ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1219 02:36:40.291211  257493 retry.go:31] will retry after 1.739510349s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;": exit status 1 (167.490585ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1219 02:36:42.198622  257493 retry.go:31] will retry after 2.347470967s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;": exit status 1 (139.162615ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1219 02:36:44.685729  257493 retry.go:31] will retry after 1.854545918s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;": exit status 1 (148.358419ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1219 02:36:46.689606  257493 retry.go:31] will retry after 6.07754612s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-453239 exec mysql-7d7b65bc95-5zm5q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (31.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/257493/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /etc/test/nested/copy/257493/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/257493.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /etc/ssl/certs/257493.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/257493.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /usr/share/ca-certificates/257493.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2574932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /etc/ssl/certs/2574932.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2574932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /usr/share/ca-certificates/2574932.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-453239 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh "sudo systemctl is-active docker": exit status 1 (329.356985ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh "sudo systemctl is-active crio": exit status 1 (305.76654ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.64s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-453239 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-453239 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-vfvcs" [1bf63500-a8cb-42a8-96eb-3247845d394e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-vfvcs" [1bf63500-a8cb-42a8-96eb-3247845d394e] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003987069s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-453239 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-453239
docker.io/kubernetesui/dashboard-auth:1.4.0
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-453239
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-453239 image ls --format short --alsologtostderr:
I1219 02:36:54.001892  329039 out.go:360] Setting OutFile to fd 1 ...
I1219 02:36:54.002411  329039 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:54.002425  329039 out.go:374] Setting ErrFile to fd 2...
I1219 02:36:54.002432  329039 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:54.002927  329039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:36:54.003919  329039 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:54.004031  329039 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:54.004512  329039 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:54.022918  329039 ssh_runner.go:195] Run: systemctl --version
I1219 02:36:54.022979  329039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:54.041362  329039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:54.141746  329039 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-453239 image ls --format table --alsologtostderr:
┌──────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                      IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├──────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver                   │ v1.35.0-rc.1                          │ sha256:588654 │ 27.7MB │
│ registry.k8s.io/kube-proxy                       │ v1.35.0-rc.1                          │ sha256:af0321 │ 25.8MB │
│ registry.k8s.io/pause                            │ 3.1                                   │ sha256:da86e6 │ 315kB  │
│ docker.io/kindest/kindnetd                       │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ sha256:4921d7 │ 42.7MB │
│ docker.io/kubernetesui/dashboard-auth            │ 1.4.0                                 │ sha256:dd5437 │ 14.5MB │
│ gcr.io/k8s-minikube/storage-provisioner          │ v5                                    │ sha256:6e38f4 │ 9.06MB │
│ public.ecr.aws/nginx/nginx                       │ alpine                                │ sha256:04da2b │ 23MB   │
│ registry.k8s.io/pause                            │ 3.10.1                                │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                            │ 3.3                                   │ sha256:0184c1 │ 298kB  │
│ docker.io/kubernetesui/dashboard-metrics-scraper │ 1.2.2                                 │ sha256:d9cbc9 │ 13MB   │
│ public.ecr.aws/docker/library/mysql              │ 8.4                                   │ sha256:20d0be │ 233MB  │
│ registry.k8s.io/coredns/coredns                  │ v1.13.1                               │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/kube-controller-manager          │ v1.35.0-rc.1                          │ sha256:5032a5 │ 23.1MB │
│ registry.k8s.io/kube-scheduler                   │ v1.35.0-rc.1                          │ sha256:73f80c │ 17.2MB │
│ docker.io/kindest/kindnetd                       │ v20250512-df8de77b                    │ sha256:409467 │ 44.4MB │
│ docker.io/library/minikube-local-cache-test      │ functional-453239                     │ sha256:d0d30d │ 992B   │
│ gcr.io/k8s-minikube/busybox                      │ 1.28.4-glibc                          │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/pause                            │ latest                                │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server                    │ functional-453239                     │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server                    │ latest                                │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/etcd                             │ 3.6.6-0                               │ sha256:0a108f │ 23.6MB │
└──────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-453239 image ls --format table --alsologtostderr:
I1219 02:36:55.970423  329475 out.go:360] Setting OutFile to fd 1 ...
I1219 02:36:55.970532  329475 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:55.970544  329475 out.go:374] Setting ErrFile to fd 2...
I1219 02:36:55.970550  329475 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:55.970775  329475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:36:55.971340  329475 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:55.971439  329475 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:55.971884  329475 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:55.990241  329475 ssh_runner.go:195] Run: systemctl --version
I1219 02:36:55.990315  329475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:56.009182  329475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:56.118146  329475 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-453239 image ls --format json --alsologtostderr:
[{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},{"id":"sha256:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"17237597"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"42673934"},{"id":"sha256:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"25789553"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1","repoDigests":["docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff"],"repoTags":["docker.io/kubernetesui/dashboard-auth:1.4.0"],"size":"14450164"},{"id":"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"23641797"},{"id":"sha256:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"23134242"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167","repoDigests":["docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775"],"repoTags":["docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"],"size":"12969394"},{"id":"sha256:d0d30d0680597f758ef8a8a6f5f6d68deb68d33c25d4d1075208813f1bdb60a0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-453239"],"size":"992"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22996569"},{"id":"sha256:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"27686536"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-453239","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"233030909"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-453239 image ls --format json --alsologtostderr:
I1219 02:36:55.734595  329419 out.go:360] Setting OutFile to fd 1 ...
I1219 02:36:55.734902  329419 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:55.734917  329419 out.go:374] Setting ErrFile to fd 2...
I1219 02:36:55.734923  329419 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:55.735140  329419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:36:55.735755  329419 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:55.735881  329419 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:55.736376  329419 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:55.755747  329419 ssh_runner.go:195] Run: systemctl --version
I1219 02:36:55.755801  329419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:55.773897  329419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:55.876865  329419 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-453239 image ls --format yaml --alsologtostderr:
- id: sha256:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "27686536"
- id: sha256:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "23134242"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "23641797"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-453239
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "42673934"
- id: sha256:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1
repoDigests:
- docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff
repoTags:
- docker.io/kubernetesui/dashboard-auth:1.4.0
size: "14450164"
- id: sha256:d0d30d0680597f758ef8a8a6f5f6d68deb68d33c25d4d1075208813f1bdb60a0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-453239
size: "992"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "233030909"
- id: sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22996569"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "25789553"
- id: sha256:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "17237597"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-453239 image ls --format yaml --alsologtostderr:
I1219 02:36:54.237340  329095 out.go:360] Setting OutFile to fd 1 ...
I1219 02:36:54.237488  329095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:54.237503  329095 out.go:374] Setting ErrFile to fd 2...
I1219 02:36:54.237509  329095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:54.237762  329095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:36:54.238368  329095 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:54.238466  329095 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:54.239080  329095 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:54.258084  329095 ssh_runner.go:195] Run: systemctl --version
I1219 02:36:54.258137  329095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:54.277268  329095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:54.379845  329095 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh pgrep buildkitd: exit status 1 (287.131419ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image build -t localhost/my-image:functional-453239 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-453239 image build -t localhost/my-image:functional-453239 testdata/build --alsologtostderr: (3.443714427s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-453239 image build -t localhost/my-image:functional-453239 testdata/build --alsologtostderr:
I1219 02:36:54.765291  329258 out.go:360] Setting OutFile to fd 1 ...
I1219 02:36:54.765403  329258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:54.765411  329258 out.go:374] Setting ErrFile to fd 2...
I1219 02:36:54.765415  329258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:36:54.765636  329258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:36:54.766233  329258 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:54.766863  329258 config.go:182] Loaded profile config "functional-453239": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:36:54.767364  329258 cli_runner.go:164] Run: docker container inspect functional-453239 --format={{.State.Status}}
I1219 02:36:54.786658  329258 ssh_runner.go:195] Run: systemctl --version
I1219 02:36:54.786733  329258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-453239
I1219 02:36:54.805449  329258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-453239/id_rsa Username:docker}
I1219 02:36:54.907911  329258 build_images.go:162] Building image from path: /tmp/build.2388124544.tar
I1219 02:36:54.907995  329258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:36:54.916571  329258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2388124544.tar
I1219 02:36:54.920451  329258 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2388124544.tar: stat -c "%s %y" /var/lib/minikube/build/build.2388124544.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2388124544.tar': No such file or directory
I1219 02:36:54.920500  329258 ssh_runner.go:362] scp /tmp/build.2388124544.tar --> /var/lib/minikube/build/build.2388124544.tar (3072 bytes)
I1219 02:36:54.939839  329258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2388124544
I1219 02:36:54.948236  329258 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2388124544 -xf /var/lib/minikube/build/build.2388124544.tar
I1219 02:36:54.957109  329258 containerd.go:394] Building image: /var/lib/minikube/build/build.2388124544
I1219 02:36:54.957196  329258 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2388124544 --local dockerfile=/var/lib/minikube/build/build.2388124544 --output type=image,name=localhost/my-image:functional-453239
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4118d7cf6670adc1d0580efd28affa2f17e731463776dec5d1e5d1d9f059ea9f done
#8 exporting config sha256:23fe8314a39d3dc4547dee352a5824fcd126b1cb94a656e1f41bcf3a302a7fa8 done
#8 naming to localhost/my-image:functional-453239 done
#8 DONE 0.1s
I1219 02:36:58.121784  329258 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2388124544 --local dockerfile=/var/lib/minikube/build/build.2388124544 --output type=image,name=localhost/my-image:functional-453239: (3.164554608s)
I1219 02:36:58.121870  329258 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2388124544
I1219 02:36:58.130606  329258 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2388124544.tar
I1219 02:36:58.139322  329258 build_images.go:218] Built localhost/my-image:functional-453239 from /tmp/build.2388124544.tar
I1219 02:36:58.139359  329258 build_images.go:134] succeeded building to: functional-453239
I1219 02:36:58.139363  329258 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (3.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-453239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image load --daemon kicbase/echo-server:functional-453239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image load --daemon kicbase/echo-server:functional-453239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-453239 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-453239 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-453239 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 322456: os: process already finished
helpers_test.go:520: unable to terminate pid 322220: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-453239 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (2.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-453239
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image load --daemon kicbase/echo-server:functional-453239 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-453239 image load --daemon kicbase/echo-server:functional-453239 --alsologtostderr: (1.274695432s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (2.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-453239 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (19.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-453239 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [4f3cadab-f1dc-4730-81a2-12494de8105e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [4f3cadab-f1dc-4730-81a2-12494de8105e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.004191766s
I1219 02:36:42.947864  257493 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (19.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image save kicbase/echo-server:functional-453239 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image rm kicbase/echo-server:functional-453239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-453239
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 image save --daemon kicbase/echo-server:functional-453239 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-453239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 service list -o json
functional_test.go:1504: Took "578.557638ms" to run "out/minikube-linux-amd64 -p functional-453239 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30490
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30490
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (13.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1037625296/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766111789591545849" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1037625296/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766111789591545849" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1037625296/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766111789591545849" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1037625296/001/test-1766111789591545849
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.673455ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1219 02:36:29.927591  257493 retry.go:31] will retry after 634.169493ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:36 test-1766111789591545849
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh cat /mount-9p/test-1766111789591545849
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-453239 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [3eaf8e16-5e14-40cb-afb0-6c9738e7c82f] Pending
helpers_test.go:353: "busybox-mount" [3eaf8e16-5e14-40cb-afb0-6c9738e7c82f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [3eaf8e16-5e14-40cb-afb0-6c9738e7c82f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [3eaf8e16-5e14-40cb-afb0-6c9738e7c82f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.004889634s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-453239 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1037625296/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (13.21s)
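Note on the `findmnt` failures above: the first check races the mount daemon, fails with exit status 1, and the harness retries after a randomized delay (`retry.go:31] will retry after 634.169493ms`). The real retry logic lives in the Go harness; the sketch below reproduces only the pattern in shell, using a hypothetical `check_mount` stub (it fails once, then succeeds, standing in for `ssh "findmnt -T /mount-9p | grep 9p"`):

```shell
#!/bin/sh
# Retry-with-delay sketch of the harness's behavior around findmnt.
# check_mount and /tmp/retry-demo-marker are illustrative stand-ins,
# not part of the minikube test suite.
attempts=0
max_attempts=5
marker=/tmp/retry-demo-marker
rm -f "$marker"

# Hypothetical stub: fails on the first call (mount not ready yet),
# succeeds on the second, mimicking a 9p mount that appears shortly after.
check_mount() {
  if [ -f "$marker" ]; then
    echo "mounted"
    return 0
  fi
  touch "$marker"
  return 1
}

until check_mount; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge "$max_attempts" ]; then
    echo "gave up after $attempts attempts" >&2
    exit 1
  fi
  sleep 0.1   # the harness picks a randomized backoff, e.g. ~634ms
done
echo "succeeded after $attempts retries"
# prints "mounted" then "succeeded after 1 retries"
```

The same pattern explains the retries logged in the specific-port and VerifyCleanup sections below: a single transient exit status 1 followed by a clean pass on the next attempt.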

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3365955552/001:/mount-9p --alsologtostderr -v=1 --port 43335]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.01493ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1219 02:36:43.123417  257493 retry.go:31] will retry after 585.797717ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3365955552/001:/mount-9p --alsologtostderr -v=1 --port 43335] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh "sudo umount -f /mount-9p": exit status 1 (326.275846ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-453239 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun3365955552/001:/mount-9p --alsologtostderr -v=1 --port 43335] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-453239 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.197.198 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-453239 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "370.833667ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.58727ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "374.513977ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "77.421347ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T" /mount1: exit status 1 (401.728189ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1219 02:36:45.255646  257493 retry.go:31] will retry after 496.32743ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-453239 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-453239 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-453239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1593327545/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-453239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-453239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-453239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (97.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1219 02:38:12.808820  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.193634  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.198947  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.209310  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.229607  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.269843  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.350177  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.510568  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:39.831678  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:40.472666  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:40.501873  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:41.753287  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:44.314134  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:38:49.434651  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m36.703042117s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (97.45s)

TestMultiControlPlane/serial/DeployApp (6.1s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 kubectl -- rollout status deployment/busybox: (3.90444489s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-dphfb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-n88p4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-qjj74 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-dphfb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-n88p4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-qjj74 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-dphfb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-n88p4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-qjj74 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.10s)

TestMultiControlPlane/serial/PingHostFromPods (1.19s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-dphfb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-dphfb -- sh -c "ping -c 1 192.168.49.1"
E1219 02:38:59.674945  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-n88p4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-n88p4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-qjj74 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 kubectl -- exec busybox-7b57f96db7-qjj74 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

TestMultiControlPlane/serial/AddWorkerNode (27.1s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node add --alsologtostderr -v 5
E1219 02:39:20.155859  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 node add --alsologtostderr -v 5: (26.167735167s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.10s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-357085 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (18.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp testdata/cp-test.txt ha-357085:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile708826203/001/cp-test_ha-357085.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085:/home/docker/cp-test.txt ha-357085-m02:/home/docker/cp-test_ha-357085_ha-357085-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test_ha-357085_ha-357085-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085:/home/docker/cp-test.txt ha-357085-m03:/home/docker/cp-test_ha-357085_ha-357085-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test_ha-357085_ha-357085-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085:/home/docker/cp-test.txt ha-357085-m04:/home/docker/cp-test_ha-357085_ha-357085-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test_ha-357085_ha-357085-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp testdata/cp-test.txt ha-357085-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile708826203/001/cp-test_ha-357085-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m02:/home/docker/cp-test.txt ha-357085:/home/docker/cp-test_ha-357085-m02_ha-357085.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test_ha-357085-m02_ha-357085.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m02:/home/docker/cp-test.txt ha-357085-m03:/home/docker/cp-test_ha-357085-m02_ha-357085-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test_ha-357085-m02_ha-357085-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m02:/home/docker/cp-test.txt ha-357085-m04:/home/docker/cp-test_ha-357085-m02_ha-357085-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test_ha-357085-m02_ha-357085-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp testdata/cp-test.txt ha-357085-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile708826203/001/cp-test_ha-357085-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m03:/home/docker/cp-test.txt ha-357085:/home/docker/cp-test_ha-357085-m03_ha-357085.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test_ha-357085-m03_ha-357085.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m03:/home/docker/cp-test.txt ha-357085-m02:/home/docker/cp-test_ha-357085-m03_ha-357085-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test_ha-357085-m03_ha-357085-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m03:/home/docker/cp-test.txt ha-357085-m04:/home/docker/cp-test_ha-357085-m03_ha-357085-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test_ha-357085-m03_ha-357085-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp testdata/cp-test.txt ha-357085-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile708826203/001/cp-test_ha-357085-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m04:/home/docker/cp-test.txt ha-357085:/home/docker/cp-test_ha-357085-m04_ha-357085.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085 "sudo cat /home/docker/cp-test_ha-357085-m04_ha-357085.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m04:/home/docker/cp-test.txt ha-357085-m02:/home/docker/cp-test_ha-357085-m04_ha-357085-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m02 "sudo cat /home/docker/cp-test_ha-357085-m04_ha-357085-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 cp ha-357085-m04:/home/docker/cp-test.txt ha-357085-m03:/home/docker/cp-test_ha-357085-m04_ha-357085-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 ssh -n ha-357085-m03 "sudo cat /home/docker/cp-test_ha-357085-m04_ha-357085-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.05s)

TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 node stop m02 --alsologtostderr -v 5: (12.03111735s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5: exit status 7 (723.113219ms)
-- stdout --
	ha-357085
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-357085-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-357085-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-357085-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1219 02:39:58.656186  352990 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:39:58.656283  352990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:39:58.656291  352990 out.go:374] Setting ErrFile to fd 2...
	I1219 02:39:58.656295  352990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:39:58.656499  352990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:39:58.656691  352990 out.go:368] Setting JSON to false
	I1219 02:39:58.656717  352990 mustload.go:66] Loading cluster: ha-357085
	I1219 02:39:58.656865  352990 notify.go:221] Checking for updates...
	I1219 02:39:58.657090  352990 config.go:182] Loaded profile config "ha-357085": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:39:58.657105  352990 status.go:174] checking status of ha-357085 ...
	I1219 02:39:58.657551  352990 cli_runner.go:164] Run: docker container inspect ha-357085 --format={{.State.Status}}
	I1219 02:39:58.676449  352990 status.go:371] ha-357085 host status = "Running" (err=<nil>)
	I1219 02:39:58.676480  352990 host.go:66] Checking if "ha-357085" exists ...
	I1219 02:39:58.676830  352990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-357085
	I1219 02:39:58.695426  352990 host.go:66] Checking if "ha-357085" exists ...
	I1219 02:39:58.695731  352990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:39:58.695793  352990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-357085
	I1219 02:39:58.713684  352990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/ha-357085/id_rsa Username:docker}
	I1219 02:39:58.814990  352990 ssh_runner.go:195] Run: systemctl --version
	I1219 02:39:58.821615  352990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:39:58.834162  352990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:39:58.890358  352990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 02:39:58.87989809 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:39:58.891028  352990 kubeconfig.go:125] found "ha-357085" server: "https://192.168.49.254:8443"
	I1219 02:39:58.891059  352990 api_server.go:166] Checking apiserver status ...
	I1219 02:39:58.891098  352990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:39:58.903953  352990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1403/cgroup
	W1219 02:39:58.913209  352990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1403/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:39:58.913268  352990 ssh_runner.go:195] Run: ls
	I1219 02:39:58.917090  352990 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1219 02:39:58.922045  352990 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1219 02:39:58.922069  352990 status.go:463] ha-357085 apiserver status = Running (err=<nil>)
	I1219 02:39:58.922079  352990 status.go:176] ha-357085 status: &{Name:ha-357085 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:39:58.922094  352990 status.go:174] checking status of ha-357085-m02 ...
	I1219 02:39:58.922324  352990 cli_runner.go:164] Run: docker container inspect ha-357085-m02 --format={{.State.Status}}
	I1219 02:39:58.939933  352990 status.go:371] ha-357085-m02 host status = "Stopped" (err=<nil>)
	I1219 02:39:58.939985  352990 status.go:384] host is not running, skipping remaining checks
	I1219 02:39:58.939996  352990 status.go:176] ha-357085-m02 status: &{Name:ha-357085-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:39:58.940035  352990 status.go:174] checking status of ha-357085-m03 ...
	I1219 02:39:58.940322  352990 cli_runner.go:164] Run: docker container inspect ha-357085-m03 --format={{.State.Status}}
	I1219 02:39:58.959524  352990 status.go:371] ha-357085-m03 host status = "Running" (err=<nil>)
	I1219 02:39:58.959552  352990 host.go:66] Checking if "ha-357085-m03" exists ...
	I1219 02:39:58.959846  352990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-357085-m03
	I1219 02:39:58.977455  352990 host.go:66] Checking if "ha-357085-m03" exists ...
	I1219 02:39:58.977813  352990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:39:58.977868  352990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-357085-m03
	I1219 02:39:58.997965  352990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/ha-357085-m03/id_rsa Username:docker}
	I1219 02:39:59.099125  352990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:39:59.112115  352990 kubeconfig.go:125] found "ha-357085" server: "https://192.168.49.254:8443"
	I1219 02:39:59.112145  352990 api_server.go:166] Checking apiserver status ...
	I1219 02:39:59.112196  352990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:39:59.124251  352990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1333/cgroup
	W1219 02:39:59.134011  352990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1333/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:39:59.134074  352990 ssh_runner.go:195] Run: ls
	I1219 02:39:59.138075  352990 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1219 02:39:59.142356  352990 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1219 02:39:59.142381  352990 status.go:463] ha-357085-m03 apiserver status = Running (err=<nil>)
	I1219 02:39:59.142390  352990 status.go:176] ha-357085-m03 status: &{Name:ha-357085-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:39:59.142407  352990 status.go:174] checking status of ha-357085-m04 ...
	I1219 02:39:59.142817  352990 cli_runner.go:164] Run: docker container inspect ha-357085-m04 --format={{.State.Status}}
	I1219 02:39:59.160740  352990 status.go:371] ha-357085-m04 host status = "Running" (err=<nil>)
	I1219 02:39:59.160772  352990 host.go:66] Checking if "ha-357085-m04" exists ...
	I1219 02:39:59.161103  352990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-357085-m04
	I1219 02:39:59.179483  352990 host.go:66] Checking if "ha-357085-m04" exists ...
	I1219 02:39:59.179767  352990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:39:59.179806  352990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-357085-m04
	I1219 02:39:59.197476  352990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/ha-357085-m04/id_rsa Username:docker}
	I1219 02:39:59.298381  352990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:39:59.311044  352990 status.go:176] ha-357085-m04 status: &{Name:ha-357085-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.74s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node start m02 --alsologtostderr -v 5
E1219 02:40:01.116678  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 node start m02 --alsologtostderr -v 5: (7.712655312s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.05s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 stop --alsologtostderr -v 5: (37.304031177s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 start --wait true --alsologtostderr -v 5
E1219 02:41:19.192298  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.198431  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.208770  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.229108  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.269406  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.349787  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.510224  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:19.830857  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:20.471542  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:21.752097  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:23.037472  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:24.313256  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:29.434284  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.674919  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 start --wait true --alsologtostderr -v 5: (58.605490597s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.05s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.62s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 node delete m03 --alsologtostderr -v 5: (8.763770132s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (36.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 stop --alsologtostderr -v 5
E1219 02:42:00.155446  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 stop --alsologtostderr -v 5: (36.107531132s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5: exit status 7 (122.009334ms)
-- stdout --
	ha-357085
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-357085-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-357085-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1219 02:42:32.308597  369230 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:42:32.308848  369230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:42:32.308856  369230 out.go:374] Setting ErrFile to fd 2...
	I1219 02:42:32.308861  369230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:42:32.309071  369230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:42:32.309258  369230 out.go:368] Setting JSON to false
	I1219 02:42:32.309283  369230 mustload.go:66] Loading cluster: ha-357085
	I1219 02:42:32.309370  369230 notify.go:221] Checking for updates...
	I1219 02:42:32.309651  369230 config.go:182] Loaded profile config "ha-357085": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:42:32.309667  369230 status.go:174] checking status of ha-357085 ...
	I1219 02:42:32.310141  369230 cli_runner.go:164] Run: docker container inspect ha-357085 --format={{.State.Status}}
	I1219 02:42:32.330717  369230 status.go:371] ha-357085 host status = "Stopped" (err=<nil>)
	I1219 02:42:32.330768  369230 status.go:384] host is not running, skipping remaining checks
	I1219 02:42:32.330783  369230 status.go:176] ha-357085 status: &{Name:ha-357085 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:42:32.330825  369230 status.go:174] checking status of ha-357085-m02 ...
	I1219 02:42:32.331162  369230 cli_runner.go:164] Run: docker container inspect ha-357085-m02 --format={{.State.Status}}
	I1219 02:42:32.348935  369230 status.go:371] ha-357085-m02 host status = "Stopped" (err=<nil>)
	I1219 02:42:32.348967  369230 status.go:384] host is not running, skipping remaining checks
	I1219 02:42:32.348974  369230 status.go:176] ha-357085-m02 status: &{Name:ha-357085-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:42:32.349000  369230 status.go:174] checking status of ha-357085-m04 ...
	I1219 02:42:32.349246  369230 cli_runner.go:164] Run: docker container inspect ha-357085-m04 --format={{.State.Status}}
	I1219 02:42:32.367056  369230 status.go:371] ha-357085-m04 host status = "Stopped" (err=<nil>)
	I1219 02:42:32.367084  369230 status.go:384] host is not running, skipping remaining checks
	I1219 02:42:32.367092  369230 status.go:176] ha-357085-m04 status: &{Name:ha-357085-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.23s)

TestMultiControlPlane/serial/RestartCluster (51.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1219 02:42:41.116724  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:43:12.809384  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (50.447037724s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (39.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 node add --control-plane --alsologtostderr -v 5
E1219 02:43:39.192738  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-357085 node add --control-plane --alsologtostderr -v 5: (38.107825264s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-357085 status --alsologtostderr -v 5
E1219 02:44:03.037284  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (41.35s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-824881 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-824881 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (41.349792291s)
--- PASS: TestJSONOutput/start/Command (41.35s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-824881 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-824881 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-824881 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-824881 --output=json --user=testUser: (5.86986994s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-304108 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-304108 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.526624ms)
-- stdout --
	{"specversion":"1.0","id":"ea7deeb4-335c-463a-85b0-503cbd5012fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-304108] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1283253-9af4-46a2-84b2-70493c7b14b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22230"}}
	{"specversion":"1.0","id":"2d84f752-31ed-4e87-8fab-e3992c77ef3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9c7d20f-66cd-4305-9d9b-637279a599e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig"}}
	{"specversion":"1.0","id":"7d678ba1-c6db-4c86-9494-d8c55fedbe8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube"}}
	{"specversion":"1.0","id":"64e73b37-fce8-4f43-9772-5fc8af27e0df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"183496be-74ed-40be-90ff-08a896a5679c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd066aa7-754f-4aa5-b3af-13c7f2a5bf45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-304108" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-304108
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (33.56s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-330665 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-330665 --network=: (31.39219639s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-330665" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-330665
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-330665: (2.145740189s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.56s)

TestKicCustomNetwork/use_default_bridge_network (22.54s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-160465 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-160465 --network=bridge: (20.484067602s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-160465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-160465
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-160465: (2.040733963s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.54s)

TestKicExistingNetwork (23.39s)

=== RUN   TestKicExistingNetwork
I1219 02:46:02.134252  257493 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1219 02:46:02.152491  257493 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1219 02:46:02.152562  257493 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1219 02:46:02.152596  257493 cli_runner.go:164] Run: docker network inspect existing-network
W1219 02:46:02.169078  257493 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1219 02:46:02.169115  257493 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1219 02:46:02.169136  257493 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1219 02:46:02.169260  257493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1219 02:46:02.187141  257493 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28e0d072336b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:be:f7:e6:b7:0b} reservation:<nil>}
I1219 02:46:02.187530  257493 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0053797c0}
I1219 02:46:02.187564  257493 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1219 02:46:02.187634  257493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1219 02:46:02.234034  257493 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-061104 --network=existing-network
E1219 02:46:19.191882  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-061104 --network=existing-network: (21.211645408s)
helpers_test.go:176: Cleaning up "existing-network-061104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-061104
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-061104: (2.040855987s)
I1219 02:46:25.505223  257493 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.39s)

TestKicCustomSubnet (23.09s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-260571 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-260571 --subnet=192.168.60.0/24: (20.930572746s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-260571 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-260571" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-260571
E1219 02:46:46.877707  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-260571: (2.135111662s)
--- PASS: TestKicCustomSubnet (23.09s)

TestKicStaticIP (23.65s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-561413 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-561413 --static-ip=192.168.200.200: (21.351105449s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-561413 ip
helpers_test.go:176: Cleaning up "static-ip-561413" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-561413
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-561413: (2.144140218s)
--- PASS: TestKicStaticIP (23.65s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-798890 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-798890 --driver=docker  --container-runtime=containerd: (22.369740543s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-801208 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-801208 --driver=docker  --container-runtime=containerd: (20.483590109s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-798890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-801208
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-801208" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-801208
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-801208: (1.9840777s)
helpers_test.go:176: Cleaning up "first-798890" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-798890
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-798890: (2.397005812s)
--- PASS: TestMinikubeProfile (48.49s)

TestMountStart/serial/StartWithMountFirst (4.5s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-236845 --memory=3072 --mount-string /tmp/TestMountStartserial1979400866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-236845 --memory=3072 --mount-string /tmp/TestMountStartserial1979400866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.496475472s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.50s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-236845 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.41s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-251825 --memory=3072 --mount-string /tmp/TestMountStartserial1979400866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-251825 --memory=3072 --mount-string /tmp/TestMountStartserial1979400866/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.404956268s)
E1219 02:48:12.809301  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountSecond (7.41s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-251825 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-236845 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-236845 --alsologtostderr -v=5: (1.68330241s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-251825 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-251825
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-251825: (1.258682837s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-251825
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-251825: (7.001121938s)
--- PASS: TestMountStart/serial/RestartStopped (8.00s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-251825 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (68.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630795 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1219 02:48:39.193707  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630795 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.735971062s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.23s)

TestMultiNode/serial/DeployApp2Nodes (4.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- rollout status deployment/busybox
E1219 02:49:35.862460  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-630795 -- rollout status deployment/busybox: (3.138397719s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-ks7g6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-kslbd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-ks7g6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-kslbd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-ks7g6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-kslbd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.71s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-ks7g6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-ks7g6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-kslbd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-630795 -- exec busybox-7b57f96db7-kslbd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
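The host-IP lookup above (multinode_test.go:572) extracts the gateway address from busybox nslookup output with `awk 'NR==5' | cut -d' ' -f3`. A minimal sketch of just that parsing stage, run against canned output instead of a live resolver (the canned text is illustrative, mimicking busybox's nslookup format):

```shell
# Canned busybox-style nslookup output; line 5 holds the resolved
# address as its third space-separated field.
canned='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'

# Same pipeline the test runs inside the pod, minus the live lookup.
host_ip=$(printf '%s\n' "$canned" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # prints 192.168.67.1, which the test then pings
```

Note the pipeline is format-sensitive: it assumes the answer is always on line 5, which holds for busybox's fixed output layout but not for bind-utils `nslookup`.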

TestMultiNode/serial/AddNode (26.19s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-630795 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-630795 -v=5 --alsologtostderr: (25.513785822s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.19s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-630795 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp testdata/cp-test.txt multinode-630795:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile410792437/001/cp-test_multinode-630795.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795:/home/docker/cp-test.txt multinode-630795-m02:/home/docker/cp-test_multinode-630795_multinode-630795-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m02 "sudo cat /home/docker/cp-test_multinode-630795_multinode-630795-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795:/home/docker/cp-test.txt multinode-630795-m03:/home/docker/cp-test_multinode-630795_multinode-630795-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m03 "sudo cat /home/docker/cp-test_multinode-630795_multinode-630795-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp testdata/cp-test.txt multinode-630795-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile410792437/001/cp-test_multinode-630795-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795-m02:/home/docker/cp-test.txt multinode-630795:/home/docker/cp-test_multinode-630795-m02_multinode-630795.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795 "sudo cat /home/docker/cp-test_multinode-630795-m02_multinode-630795.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795-m02:/home/docker/cp-test.txt multinode-630795-m03:/home/docker/cp-test_multinode-630795-m02_multinode-630795-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m03 "sudo cat /home/docker/cp-test_multinode-630795-m02_multinode-630795-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp testdata/cp-test.txt multinode-630795-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile410792437/001/cp-test_multinode-630795-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795-m03:/home/docker/cp-test.txt multinode-630795:/home/docker/cp-test_multinode-630795-m03_multinode-630795.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795 "sudo cat /home/docker/cp-test_multinode-630795-m03_multinode-630795.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 cp multinode-630795-m03:/home/docker/cp-test.txt multinode-630795-m02:/home/docker/cp-test_multinode-630795-m03_multinode-630795-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 ssh -n multinode-630795-m02 "sudo cat /home/docker/cp-test_multinode-630795-m03_multinode-630795-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.15s)
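Each `cp` step above is verified by reading the file back with `ssh -- sudo cat` and comparing it against the original. The same round-trip check, sketched with local temp files standing in for the minikube node paths (file names and contents here are illustrative, not the actual testdata):

```shell
# Stand-in for: minikube cp testdata/cp-test.txt node:/home/docker/cp-test.txt
src=$(mktemp) && dst=$(mktemp)
printf 'Test file for the cp command\n' > "$src"
cp "$src" "$dst"

# Stand-in for: minikube ssh -n node "sudo cat /home/docker/cp-test.txt"
readback=$(cat "$dst")

# The test fails unless the readback is byte-identical to the source.
[ "$readback" = "$(cat "$src")" ] && result=match || result=differ
echo "$result"
```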

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-630795 node stop m03: (1.278875679s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630795 status: exit status 7 (510.925141ms)

-- stdout --
	multinode-630795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-630795-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-630795-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr: exit status 7 (507.447732ms)

-- stdout --
	multinode-630795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-630795-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-630795-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1219 02:50:19.330126  431463 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:50:19.330230  431463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:50:19.330237  431463 out.go:374] Setting ErrFile to fd 2...
	I1219 02:50:19.330247  431463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:50:19.330449  431463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:50:19.330662  431463 out.go:368] Setting JSON to false
	I1219 02:50:19.330692  431463 mustload.go:66] Loading cluster: multinode-630795
	I1219 02:50:19.330782  431463 notify.go:221] Checking for updates...
	I1219 02:50:19.331206  431463 config.go:182] Loaded profile config "multinode-630795": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:50:19.331231  431463 status.go:174] checking status of multinode-630795 ...
	I1219 02:50:19.331816  431463 cli_runner.go:164] Run: docker container inspect multinode-630795 --format={{.State.Status}}
	I1219 02:50:19.352289  431463 status.go:371] multinode-630795 host status = "Running" (err=<nil>)
	I1219 02:50:19.352315  431463 host.go:66] Checking if "multinode-630795" exists ...
	I1219 02:50:19.352675  431463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-630795
	I1219 02:50:19.372235  431463 host.go:66] Checking if "multinode-630795" exists ...
	I1219 02:50:19.372480  431463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:50:19.372528  431463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-630795
	I1219 02:50:19.389540  431463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/multinode-630795/id_rsa Username:docker}
	I1219 02:50:19.489093  431463 ssh_runner.go:195] Run: systemctl --version
	I1219 02:50:19.495644  431463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:50:19.508195  431463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:50:19.561763  431463 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-19 02:50:19.552187275 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:50:19.562466  431463 kubeconfig.go:125] found "multinode-630795" server: "https://192.168.67.2:8443"
	I1219 02:50:19.562515  431463 api_server.go:166] Checking apiserver status ...
	I1219 02:50:19.562564  431463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:50:19.574772  431463 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1312/cgroup
	W1219 02:50:19.583184  431463 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1312/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:50:19.583249  431463 ssh_runner.go:195] Run: ls
	I1219 02:50:19.587018  431463 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1219 02:50:19.592016  431463 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1219 02:50:19.592041  431463 status.go:463] multinode-630795 apiserver status = Running (err=<nil>)
	I1219 02:50:19.592052  431463 status.go:176] multinode-630795 status: &{Name:multinode-630795 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:50:19.592071  431463 status.go:174] checking status of multinode-630795-m02 ...
	I1219 02:50:19.592305  431463 cli_runner.go:164] Run: docker container inspect multinode-630795-m02 --format={{.State.Status}}
	I1219 02:50:19.609734  431463 status.go:371] multinode-630795-m02 host status = "Running" (err=<nil>)
	I1219 02:50:19.609760  431463 host.go:66] Checking if "multinode-630795-m02" exists ...
	I1219 02:50:19.610080  431463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-630795-m02
	I1219 02:50:19.627142  431463 host.go:66] Checking if "multinode-630795-m02" exists ...
	I1219 02:50:19.627489  431463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:50:19.627539  431463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-630795-m02
	I1219 02:50:19.644938  431463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/multinode-630795-m02/id_rsa Username:docker}
	I1219 02:50:19.744206  431463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:50:19.756422  431463 status.go:176] multinode-630795-m02 status: &{Name:multinode-630795-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:50:19.756459  431463 status.go:174] checking status of multinode-630795-m03 ...
	I1219 02:50:19.756772  431463 cli_runner.go:164] Run: docker container inspect multinode-630795-m03 --format={{.State.Status}}
	I1219 02:50:19.775485  431463 status.go:371] multinode-630795-m03 host status = "Stopped" (err=<nil>)
	I1219 02:50:19.775507  431463 status.go:384] host is not running, skipping remaining checks
	I1219 02:50:19.775514  431463 status.go:176] multinode-630795-m03 status: &{Name:multinode-630795-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

TestMultiNode/serial/StartAfterStop (7.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-630795 node start m03 -v=5 --alsologtostderr: (6.249895026s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.01s)

TestMultiNode/serial/RestartKeepsNodes (69.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-630795
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-630795
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-630795: (25.065549829s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630795 --wait=true -v=5 --alsologtostderr
E1219 02:51:19.191563  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630795 --wait=true -v=5 --alsologtostderr: (43.977361177s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-630795
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.18s)

TestMultiNode/serial/DeleteNode (5.36s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-630795 node delete m03: (4.72970588s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.36s)

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-630795 stop: (23.861050751s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630795 status: exit status 7 (105.47639ms)

-- stdout --
	multinode-630795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-630795-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr: exit status 7 (107.850162ms)

-- stdout --
	multinode-630795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-630795-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1219 02:52:05.360337  441228 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:52:05.360686  441228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:52:05.360698  441228 out.go:374] Setting ErrFile to fd 2...
	I1219 02:52:05.360702  441228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:52:05.360963  441228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:52:05.361208  441228 out.go:368] Setting JSON to false
	I1219 02:52:05.361238  441228 mustload.go:66] Loading cluster: multinode-630795
	I1219 02:52:05.361385  441228 notify.go:221] Checking for updates...
	I1219 02:52:05.361766  441228 config.go:182] Loaded profile config "multinode-630795": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:52:05.361789  441228 status.go:174] checking status of multinode-630795 ...
	I1219 02:52:05.362407  441228 cli_runner.go:164] Run: docker container inspect multinode-630795 --format={{.State.Status}}
	I1219 02:52:05.383958  441228 status.go:371] multinode-630795 host status = "Stopped" (err=<nil>)
	I1219 02:52:05.383983  441228 status.go:384] host is not running, skipping remaining checks
	I1219 02:52:05.383990  441228 status.go:176] multinode-630795 status: &{Name:multinode-630795 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:52:05.384014  441228 status.go:174] checking status of multinode-630795-m02 ...
	I1219 02:52:05.384299  441228 cli_runner.go:164] Run: docker container inspect multinode-630795-m02 --format={{.State.Status}}
	I1219 02:52:05.403253  441228 status.go:371] multinode-630795-m02 host status = "Stopped" (err=<nil>)
	I1219 02:52:05.403292  441228 status.go:384] host is not running, skipping remaining checks
	I1219 02:52:05.403302  441228 status.go:176] multinode-630795-m02 status: &{Name:multinode-630795-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)
TestMultiNode/serial/RestartMultiNode (44.28s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630795 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630795 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (43.677864395s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-630795 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.28s)
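[editor's note] The `kubectl get nodes -o go-template` invocations above walk each node's `status.conditions` and print the status of the `Ready` condition. As a minimal sketch, the same extraction in Python over hypothetical sample data shaped like `kubectl get nodes -o json` output (not captured from this run):

```python
# Sketch: the Ready-condition extraction the go-template performs,
# applied to hypothetical node data (not from this test run).
nodes = {
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [
            {"type": "Ready", "status": "True"},
        ]}},
    ]
}

def ready_statuses(node_list):
    """Return the status of every condition whose type is 'Ready', one per node."""
    return [
        cond["status"]
        for item in node_list["items"]
        for cond in item["status"]["conditions"]
        if cond["type"] == "Ready"
    ]

print(ready_statuses(nodes))  # → ['True', 'True']
```

The test asserts every printed status is `True`, i.e. all restarted nodes rejoined in Ready state.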
TestMultiNode/serial/ValidateNameConflict (24.71s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-630795
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630795-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-630795-m02 --driver=docker  --container-runtime=containerd: exit status 14 (77.611289ms)
-- stdout --
	* [multinode-630795-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-630795-m02' is duplicated with machine name 'multinode-630795-m02' in profile 'multinode-630795'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-630795-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-630795-m03 --driver=docker  --container-runtime=containerd: (21.869925713s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-630795
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-630795: exit status 80 (304.51438ms)
-- stdout --
	* Adding node m03 to cluster multinode-630795 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-630795-m03 already exists in multinode-630795-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-630795-m03
E1219 02:53:12.809314  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-630795-m03: (2.39127755s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.71s)
TestPreload (109.63s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-437011 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1219 02:53:39.193420  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-437011 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (46.591779999s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-437011 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-437011 image pull gcr.io/k8s-minikube/busybox: (2.825253086s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-437011
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-437011: (6.745032692s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-437011 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1219 02:55:02.239239  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-437011 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.831276792s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-437011 image list
helpers_test.go:176: Cleaning up "test-preload-437011" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-437011
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-437011: (2.404164386s)
--- PASS: TestPreload (109.63s)
TestScheduledStopUnix (95.49s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-709255 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-709255 --memory=3072 --driver=docker  --container-runtime=containerd: (19.113794434s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-709255 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1219 02:55:27.483733  459414 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:55:27.483834  459414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:27.483842  459414 out.go:374] Setting ErrFile to fd 2...
	I1219 02:55:27.483846  459414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:27.484044  459414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:55:27.484259  459414 out.go:368] Setting JSON to false
	I1219 02:55:27.484349  459414 mustload.go:66] Loading cluster: scheduled-stop-709255
	I1219 02:55:27.484651  459414 config.go:182] Loaded profile config "scheduled-stop-709255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:55:27.484716  459414 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/config.json ...
	I1219 02:55:27.484896  459414 mustload.go:66] Loading cluster: scheduled-stop-709255
	I1219 02:55:27.484997  459414 config.go:182] Loaded profile config "scheduled-stop-709255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-709255 -n scheduled-stop-709255
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-709255 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1219 02:55:27.878995  459565 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:55:27.879302  459565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:27.879315  459565 out.go:374] Setting ErrFile to fd 2...
	I1219 02:55:27.879322  459565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:27.879556  459565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:55:27.879952  459565 out.go:368] Setting JSON to false
	I1219 02:55:27.880221  459565 daemonize_unix.go:73] killing process 459449 as it is an old scheduled stop
	I1219 02:55:27.880339  459565 mustload.go:66] Loading cluster: scheduled-stop-709255
	I1219 02:55:27.880767  459565 config.go:182] Loaded profile config "scheduled-stop-709255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:55:27.880864  459565 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/config.json ...
	I1219 02:55:27.881073  459565 mustload.go:66] Loading cluster: scheduled-stop-709255
	I1219 02:55:27.881221  459565 config.go:182] Loaded profile config "scheduled-stop-709255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1219 02:55:27.886222  257493 retry.go:31] will retry after 137.807µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.887371  257493 retry.go:31] will retry after 82.437µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.888517  257493 retry.go:31] will retry after 235.349µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.889627  257493 retry.go:31] will retry after 223.927µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.890769  257493 retry.go:31] will retry after 589.815µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.891894  257493 retry.go:31] will retry after 887.901µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.893029  257493 retry.go:31] will retry after 570.517µs: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.894151  257493 retry.go:31] will retry after 1.348539ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.896389  257493 retry.go:31] will retry after 1.957796ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.898596  257493 retry.go:31] will retry after 3.701968ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.902860  257493 retry.go:31] will retry after 8.352298ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.912119  257493 retry.go:31] will retry after 6.09849ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.918301  257493 retry.go:31] will retry after 11.192137ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.930605  257493 retry.go:31] will retry after 16.891694ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.947891  257493 retry.go:31] will retry after 29.83639ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
I1219 02:55:27.978162  257493 retry.go:31] will retry after 34.434251ms: open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/pid: no such file or directory
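[editor's note] The `retry.go` waits above grow roughly geometrically with some randomization. A minimal sketch of that pattern; the initial wait, growth factor, and jitter fraction here are illustrative assumptions, not minikube's actual retry parameters:

```python
import random

def backoff_intervals(initial, factor, jitter, steps):
    """Yield waits that grow by `factor` each step, each randomized by +/- `jitter`."""
    wait = initial
    for _ in range(steps):
        yield wait * (1 + random.uniform(-jitter, jitter))
        wait *= factor

# Roughly the shape of the logged waits: microseconds growing into milliseconds
# over 16 attempts while polling for the scheduled-stop pid file.
waits = list(backoff_intervals(initial=150e-6, factor=1.7, jitter=0.5, steps=16))
```

Jitter explains why the logged intervals occasionally shrink between attempts (e.g. 11.2ms followed by 6.1ms) while still trending upward.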
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-709255 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-709255 -n scheduled-stop-709255
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-709255
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-709255 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1219 02:55:53.794558  460461 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:55:53.794839  460461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:53.794848  460461 out.go:374] Setting ErrFile to fd 2...
	I1219 02:55:53.794852  460461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:53.795035  460461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 02:55:53.795245  460461 out.go:368] Setting JSON to false
	I1219 02:55:53.795319  460461 mustload.go:66] Loading cluster: scheduled-stop-709255
	I1219 02:55:53.795640  460461 config.go:182] Loaded profile config "scheduled-stop-709255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:55:53.795715  460461 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/scheduled-stop-709255/config.json ...
	I1219 02:55:53.795916  460461 mustload.go:66] Loading cluster: scheduled-stop-709255
	I1219 02:55:53.796009  460461 config.go:182] Loaded profile config "scheduled-stop-709255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1219 02:56:19.196515  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-709255
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-709255: exit status 7 (80.207267ms)
-- stdout --
	scheduled-stop-709255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-709255 -n scheduled-stop-709255
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-709255 -n scheduled-stop-709255: exit status 7 (79.967844ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-709255" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-709255
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-709255: (4.82601375s)
--- PASS: TestScheduledStopUnix (95.49s)
TestInsufficientStorage (11.65s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-297182 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-297182 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.17010949s)
-- stdout --
	{"specversion":"1.0","id":"906e1386-fe07-46d7-b6c3-09464bc07093","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-297182] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f1d2bbea-89a7-4b85-8a6d-10797c0ad290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22230"}}
	{"specversion":"1.0","id":"94c0ede8-fe40-475c-8a2e-f4f95802b828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f8df7a9e-74e5-44d9-90ad-78f96e8f8b5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig"}}
	{"specversion":"1.0","id":"8c6a4cac-166c-4a09-ba5c-a016e0cf8a84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube"}}
	{"specversion":"1.0","id":"d713add6-1d6e-40a2-b3b2-dfa03ef0c1e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"68648f9a-f36f-4526-a992-0bfd0e933f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d97a8cfd-b2dc-488d-9e0d-f5fa84c62598","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e18b094d-cafc-4b47-9472-f1e47dfdc7cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"30ebb36b-459a-4b43-8a97-723f91b90e91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"df5a675c-9269-4e49-8d4f-53c80416f99b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"63697c46-e6ab-4c9b-a303-11777417a383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-297182\" primary control-plane node in \"insufficient-storage-297182\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"69dbe039-6281-4107-b279-6c12bbb27972","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765966054-22186 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd9297ab-880f-4fa5-b1cb-3b6fbdd70507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6b4bfcd-a967-4f8c-af18-de33bb7d20e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
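[editor's note] Each stdout line above is a CloudEvents envelope emitted by `--output=json`. A minimal sketch of picking the error event out of such a stream, using two lines abbreviated from the output above:

```python
import json

# Two lines abbreviated from the --output=json stream above.
stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=22230"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}',
]

# Keep only the payloads of error-typed events.
errors = [
    event["data"]
    for event in map(json.loads, stream)
    if event["type"] == "io.k8s.sigs.minikube.error"
]
print(errors[0]["name"], errors[0]["exitcode"])  # → RSRC_DOCKER_STORAGE 26
```

The test drives the same check: with `MINIKUBE_TEST_AVAILABLE_STORAGE=19` forcing the disk-space gate, start must exit 26 and the stream must carry the `RSRC_DOCKER_STORAGE` error event.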
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-297182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-297182 --output=json --layout=cluster: exit status 7 (290.768936ms)
-- stdout --
	{"Name":"insufficient-storage-297182","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-297182","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1219 02:56:53.244916  462753 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-297182" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-297182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-297182 --output=json --layout=cluster: exit status 7 (290.649898ms)
-- stdout --
	{"Name":"insufficient-storage-297182","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-297182","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1219 02:56:53.536358  462868 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-297182" does not appear in /home/jenkins/minikube-integration/22230-253859/kubeconfig
	E1219 02:56:53.546954  462868 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/insufficient-storage-297182/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-297182" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-297182
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-297182: (1.893437319s)
--- PASS: TestInsufficientStorage (11.65s)
TestRunningBinaryUpgrade (298.38s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.726790316 start -p running-upgrade-959579 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1219 02:57:42.238922  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.726790316 start -p running-upgrade-959579 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (20.949549531s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-959579 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-959579 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m31.542243856s)
helpers_test.go:176: Cleaning up "running-upgrade-959579" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-959579
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-959579: (2.107980344s)
--- PASS: TestRunningBinaryUpgrade (298.38s)
TestKubernetesUpgrade (307.26s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1219 02:58:39.193124  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.18737694s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-340572
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-340572: (1.295938432s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-340572 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-340572 status --format={{.Host}}: exit status 7 (92.148282ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m28.713342998s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-340572 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (713.482319ms)
-- stdout --
	* [kubernetes-upgrade-340572] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-340572
	    minikube start -p kubernetes-upgrade-340572 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3405722 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-340572 --kubernetes-version=v1.35.0-rc.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-340572 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.932308108s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-340572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-340572
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-340572: (2.232009652s)
--- PASS: TestKubernetesUpgrade (307.26s)
TestMissingContainerUpgrade (91.19s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.225199569 start -p missing-upgrade-031462 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.225199569 start -p missing-upgrade-031462 --memory=3072 --driver=docker  --container-runtime=containerd: (20.407895388s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-031462
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-031462
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-031462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-031462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.105925168s)
helpers_test.go:176: Cleaning up "missing-upgrade-031462" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-031462
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-031462: (1.958921606s)
--- PASS: TestMissingContainerUpgrade (91.19s)
TestStoppedBinaryUpgrade/Setup (3.76s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.76s)
TestPause/serial/Start (56.43s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-536320 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-536320 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (56.425058955s)
--- PASS: TestPause/serial/Start (56.43s)
TestStoppedBinaryUpgrade/Upgrade (327.77s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2376555746 start -p stopped-upgrade-559723 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2376555746 start -p stopped-upgrade-559723 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (52.302465942s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2376555746 -p stopped-upgrade-559723 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2376555746 -p stopped-upgrade-559723 stop: (1.233200305s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-559723 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-559723 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m34.238581091s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (327.77s)
TestPause/serial/SecondStartNoReconfiguration (6.3s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-536320 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-536320 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.284161482s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.30s)
TestPause/serial/Pause (0.79s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-536320 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)
TestPause/serial/VerifyStatus (0.47s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-536320 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-536320 --output=json --layout=cluster: exit status 2 (466.78622ms)
-- stdout --
	{"Name":"pause-536320","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-536320","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
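The `--layout=cluster` capture above reports component state as paired numeric codes and names (200/OK, 405/Stopped, 418/Paused in this run), and a paused cluster makes `status` exit non-zero (exit status 2 here) even though the test passes. A minimal sketch of pulling those fields out of the captured JSON; the literal below is copied from the stdout block above, trimmed to the fields used:

```python
import json

# Status JSON captured from `minikube status --output=json --layout=cluster`
# in the VerifyStatus step above (trimmed to the fields inspected here).
status = json.loads(
    '{"Name":"pause-536320","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-536320","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

# Top-level state of the profile, then per-component state for each node.
print(status["StatusName"])  # Paused
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f'{name}: {comp["StatusCode"]} {comp["StatusName"]}')
```

The code/name pairing is what the test relies on: it tolerates the non-zero exit and asserts on the parsed layout instead.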
TestPause/serial/Unpause (0.97s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-536320 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)
TestPause/serial/PauseAgain (0.93s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-536320 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)
TestPause/serial/DeletePaused (4.64s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-536320 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-536320 --alsologtostderr -v=5: (4.637452779s)
--- PASS: TestPause/serial/DeletePaused (4.64s)
TestPause/serial/VerifyDeletedResources (18.87s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1219 02:58:12.808803  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (18.812917499s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-536320
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-536320: exit status 1 (16.127857ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-536320: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (18.87s)
TestStoppedBinaryUpgrade/MinikubeLogs (2.56s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-559723
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-559723: (2.555751603s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.56s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821572 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-821572 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (92.038648ms)
-- stdout --
	* [NoKubernetes-821572] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
TestNoKubernetes/serial/StartWithK8s (26.08s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821572 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821572 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.667622823s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-821572 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.08s)
TestNoKubernetes/serial/StartWithStopK8s (8.87s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.32950635s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-821572 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-821572 status -o json: exit status 2 (360.480586ms)
-- stdout --
	{"Name":"NoKubernetes-821572","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-821572
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-821572: (2.176779672s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.87s)
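The `status -o json` capture above shows the expected shape after restarting with `--no-kubernetes`: the host container keeps running while the kubelet and apiserver are stopped, which is also why the command exits with status 2 rather than 0. A small sketch of checking that shape, parsing the literal captured above:

```python
import json

# Output captured from `minikube -p NoKubernetes-821572 status -o json`
# in the StartWithStopK8s step above.
raw = ('{"Name":"NoKubernetes-821572","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
st = json.loads(raw)

# A no-kubernetes profile should keep the machine up with no control plane.
running_without_k8s = st["Host"] == "Running" and st["Kubelet"] == "Stopped"
print(running_without_k8s)  # True
```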
TestNetworkPlugins/group/false (4.61s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-289681 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-289681 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (191.160371ms)
-- stdout --
	* [false-289681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1219 03:03:04.251318  539241 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:03:04.251692  539241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:03:04.251705  539241 out.go:374] Setting ErrFile to fd 2...
	I1219 03:03:04.251710  539241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:03:04.251976  539241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
	I1219 03:03:04.252631  539241 out.go:368] Setting JSON to false
	I1219 03:03:04.253894  539241 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6323,"bootTime":1766107061,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:03:04.253991  539241 start.go:143] virtualization: kvm guest
	I1219 03:03:04.256526  539241 out.go:179] * [false-289681] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:03:04.258076  539241 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:03:04.258116  539241 notify.go:221] Checking for updates...
	I1219 03:03:04.261225  539241 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:03:04.262766  539241 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
	I1219 03:03:04.264346  539241 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
	I1219 03:03:04.265622  539241 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:03:04.266991  539241 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:03:04.269181  539241 config.go:182] Loaded profile config "NoKubernetes-821572": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1219 03:03:04.269317  539241 config.go:182] Loaded profile config "cert-options-967008": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:03:04.269442  539241 config.go:182] Loaded profile config "kubernetes-upgrade-340572": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:03:04.269601  539241 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:03:04.296093  539241 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:03:04.296270  539241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:03:04.357678  539241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:82 SystemTime:2025-12-19 03:03:04.347605745 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:03:04.357833  539241 docker.go:319] overlay module found
	I1219 03:03:04.360447  539241 out.go:179] * Using the docker driver based on user configuration
	I1219 03:03:04.361746  539241 start.go:309] selected driver: docker
	I1219 03:03:04.361768  539241 start.go:928] validating driver "docker" against <nil>
	I1219 03:03:04.361784  539241 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:03:04.363981  539241 out.go:203] 
	W1219 03:03:04.365447  539241 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1219 03:03:04.366382  539241 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-289681 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-289681
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-289681
>>> host: /etc/nsswitch.conf:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"
>>> host: /etc/hosts:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"
>>> host: /etc/resolv.conf:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-289681
>>> host: crictl pods:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"
>>> host: crictl containers:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"
>>> k8s: describe netcat deployment:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-289681" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 03:02:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-821572
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:59:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-340572
contexts:
- context:
    cluster: NoKubernetes-821572
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 03:02:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-821572
  name: NoKubernetes-821572
- context:
    cluster: kubernetes-upgrade-340572
    user: kubernetes-upgrade-340572
  name: kubernetes-upgrade-340572
current-context: NoKubernetes-821572
kind: Config
users:
- name: NoKubernetes-821572
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/NoKubernetes-821572/client.crt
    client-key: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/NoKubernetes-821572/client.key
- name: kubernetes-upgrade-340572
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/kubernetes-upgrade-340572/client.crt
    client-key: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/kubernetes-upgrade-340572/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-289681

>>> host: docker daemon status:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: docker daemon config:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: /etc/docker/daemon.json:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: docker system info:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: cri-docker daemon status:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: cri-docker daemon config:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: cri-dockerd version:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: containerd daemon status:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: containerd daemon config:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: /etc/containerd/config.toml:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: containerd config dump:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: crio daemon status:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: crio daemon config:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: /etc/crio:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

>>> host: crio config:
* Profile "false-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-289681"

----------------------- debugLogs end: false-289681 [took: 3.966731975s] --------------------------------
helpers_test.go:176: Cleaning up "false-289681" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-289681
--- PASS: TestNetworkPlugins/group/false (4.61s)

TestNoKubernetes/serial/Start (4.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1219 03:03:12.808726  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821572 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.877893409s)
--- PASS: TestNoKubernetes/serial/Start (4.88s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22230-253859/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-821572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-821572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (401.184639ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (57.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (57.080192471s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (57.08s)

TestNoKubernetes/serial/ProfileList (9.87s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (8.692053276s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.176819792s)
--- PASS: TestNoKubernetes/serial/ProfileList (9.87s)

TestNoKubernetes/serial/Stop (2.1s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-821572
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-821572: (2.098890173s)
--- PASS: TestNoKubernetes/serial/Stop (2.10s)

TestNoKubernetes/serial/StartNoArgs (7.39s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821572 --driver=docker  --container-runtime=containerd: (7.390344411s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.39s)

TestStartStop/group/no-preload/serial/FirstStart (52.59s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (52.59127722s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.59s)

TestStartStop/group/embed-certs/serial/FirstStart (45.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (45.034859832s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.04s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-821572 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-821572 "sudo systemctl is-active --quiet service kubelet": exit status 1 (411.600261ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
E1219 03:03:39.193169  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (50.582740714s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.58s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-002036 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9de888c7-fb93-4e9d-a535-31b7b29f921f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9de888c7-fb93-4e9d-a535-31b7b29f921f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003597694s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-002036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-536489 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [79b035a8-ae4b-4a1f-8458-4fe0f7d4ebef] Pending
helpers_test.go:353: "busybox" [79b035a8-ae4b-4a1f-8458-4fe0f7d4ebef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [79b035a8-ae4b-4a1f-8458-4fe0f7d4ebef] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004345287s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-536489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-208281 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d1aebe15-3209-41e5-9992-da4b0690a286] Pending
helpers_test.go:353: "busybox" [d1aebe15-3209-41e5-9992-da4b0690a286] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d1aebe15-3209-41e5-9992-da4b0690a286] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003598899s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-208281 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-002036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-002036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-002036 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-002036 --alsologtostderr -v=3: (12.077301591s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-536489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-536489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-536489 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-536489 --alsologtostderr -v=3: (12.087805377s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [97d1d235-bad2-4304-8138-0d5f860d9a2a] Pending
helpers_test.go:353: "busybox" [97d1d235-bad2-4304-8138-0d5f860d9a2a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [97d1d235-bad2-4304-8138-0d5f860d9a2a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004546196s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-208281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-208281 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (12.17s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-208281 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-208281 --alsologtostderr -v=3: (12.173492348s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-002036 -n old-k8s-version-002036
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-002036 -n old-k8s-version-002036: exit status 7 (84.41252ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-002036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (48.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-002036 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.807860895s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-002036 -n old-k8s-version-002036
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.18s)
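Every result line in this report follows go test's standard `--- PASS|FAIL|SKIP: <name> (<duration>s)` convention, so per-test durations can be tallied from a saved copy of the log. A minimal sketch (the sample lines are copied from this report; the helper name is illustrative, not part of any tool shown here):

```python
import re

# go test result lines look like:
#   --- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.18s)
RESULT_RE = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \((\d+(?:\.\d+)?)s\)")

def parse_results(lines):
    """Yield (status, test_name, seconds) for every go test result line."""
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            yield m.group(1), m.group(2), float(m.group(3))

sample = [
    "--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.18s)",
    "--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)",
]
results = list(parse_results(sample))
slowest = max(results, key=lambda r: r[2])
print(slowest[1], slowest[2])
# → TestStartStop/group/old-k8s-version/serial/SecondStart 48.18
```

Sorting on the extracted duration is an easy way to spot which serial steps (here, the `SecondStart` restarts) dominate the run time.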

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-103644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-103644 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.44s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-103644 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-103644 --alsologtostderr -v=3: (12.443541628s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.44s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-536489 -n embed-certs-536489
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-536489 -n embed-certs-536489: exit status 7 (94.727698ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-536489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (375.51s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-536489 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (6m15.14025173s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-536489 -n embed-certs-536489
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (375.51s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208281 -n no-preload-208281
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208281 -n no-preload-208281: exit status 7 (91.709306ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-208281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (377.47s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-208281 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (6m17.118074403s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-208281 -n no-preload-208281
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (377.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644: exit status 7 (156.599934ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-103644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (378.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-103644 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.3: (6m17.74188912s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (378.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-002036 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-002036 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036: exit status 2 (325.118906ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-002036 -n old-k8s-version-002036
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-002036 -n old-k8s-version-002036: exit status 2 (331.433185ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-002036 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-002036 -n old-k8s-version-002036
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-002036 -n old-k8s-version-002036
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)

TestStartStop/group/newest-cni/serial/FirstStart (23.62s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1219 03:23:39.193404  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (23.617042167s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (23.62s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-017890 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/newest-cni/serial/Stop (1.43s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-017890 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-017890 --alsologtostderr -v=3: (1.425163293s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-017890 -n newest-cni-017890
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-017890 -n newest-cni-017890: exit status 7 (86.705715ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-017890 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (374.09s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1219 03:24:11.187632  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.192982  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.203289  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.223626  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.263943  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.344304  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.505121  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.825697  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:12.466907  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:13.747587  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:16.308710  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:21.429664  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.670453  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:52.151359  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:25:33.112121  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:26:19.191537  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:26:55.032842  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:28:12.808959  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:28:22.241440  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:28:39.193274  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-017890 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (6m13.724576211s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-017890 -n newest-cni-017890
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (374.09s)
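The `SecondStart` output above is interleaved with klog-format runtime errors (`E<MMDD> <time> <pid> <file>:<line>] ...` from `cert_rotation.go`). When scanning a saved copy of this log for the actual test steps, those lines can be filtered out by their header. A small sketch (regex and function name are illustrative; the sample lines are taken from this report):

```python
import re

# klog lines begin with a severity letter, an MMDD date, a timestamp, a PID,
# and a file:line marker, e.g.:
#   E1219 03:24:11.187632  257493 cert_rotation.go:172] "Loading client cert failed" ...
KLOG_RE = re.compile(r"^[IWEF]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ \S+:\d+\]")

def strip_klog(lines):
    """Return only the lines that are not klog-formatted runtime messages."""
    return [line for line in lines if not KLOG_RE.match(line)]

sample = [
    'E1219 03:24:11.187632  257493 cert_rotation.go:172] "Loading client cert failed"',
    "start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-017890",
]
print(strip_klog(sample))
# → only the start_stop_delete_test.go line remains
```

Filtering rather than deleting keeps the original log intact while making the test-step sequence readable.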

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-536489 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.21s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-536489 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489: exit status 2 (343.153486ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-536489 -n embed-certs-536489
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-536489 -n embed-certs-536489: exit status 2 (354.25903ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-536489 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-536489 -n embed-certs-536489
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-536489 -n embed-certs-536489
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-208281 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.26s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-208281 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281: exit status 2 (357.680577ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-208281 -n no-preload-208281
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-208281 -n no-preload-208281: exit status 2 (354.840304ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-208281 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-208281 -n no-preload-208281
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-208281 -n no-preload-208281
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)

TestNetworkPlugins/group/auto/Start (45.07s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (45.073184575s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.07s)

TestNetworkPlugins/group/calico/Start (49.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (49.80682919s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.81s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-103644 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-103644 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644: exit status 2 (389.6218ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644: exit status 2 (368.753022ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-103644 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-103644 -n default-k8s-diff-port-103644
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.33s)

TestNetworkPlugins/group/custom-flannel/Start (54.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1219 03:29:22.682122  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:25.243251  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.544260  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.549559  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.560263  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.581305  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.621849  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.702352  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:28.863189  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:29.184247  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:29.825246  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:30.363814  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:31.106202  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:33.666944  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:38.787286  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:38.873654  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/old-k8s-version-002036/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:40.606747  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:29:49.028038  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.506847114s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-289681 "pgrep -a kubelet"
I1219 03:29:53.316330  257493 config.go:182] Loaded profile config "auto-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pcwzk" [1983e5af-3fcc-4490-b6a6-bd462bec5064] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pcwzk" [1983e5af-3fcc-4490-b6a6-bd462bec5064] Running
E1219 03:30:01.087954  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/no-preload-208281/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004065538s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-289681 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-cpzsg" [86e188bf-77d8-450c-8c92-4f044b9966c0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-cpzsg" [86e188bf-77d8-450c-8c92-4f044b9966c0] Running
E1219 03:30:09.508740  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/default-k8s-diff-port-103644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003621052s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-289681 "pgrep -a kubelet"
I1219 03:30:10.308686  257493 config.go:182] Loaded profile config "calico-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-w49p6" [41ba1d1f-b572-4677-8ee7-fe77d87fff0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-w49p6" [41ba1d1f-b572-4677-8ee7-fe77d87fff0d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003903255s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-017890 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-017890 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-017890 -n newest-cni-017890
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-017890 -n newest-cni-017890: exit status 2 (362.875614ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-017890 -n newest-cni-017890
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-017890 -n newest-cni-017890: exit status 2 (351.191282ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-017890 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-017890 -n newest-cni-017890
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-017890 -n newest-cni-017890
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-289681 "pgrep -a kubelet"
I1219 03:30:16.541763  257493 config.go:182] Loaded profile config "custom-flannel-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hxrw2" [f9169671-15b7-4a30-b030-a8368940205e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hxrw2" [f9169671-15b7-4a30-b030-a8368940205e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004798653s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-289681 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (70.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m10.235625991s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.24s)

TestNetworkPlugins/group/flannel/Start (54.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.882547135s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.88s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-289681 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (39.91s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (39.911524187s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.91s)

TestNetworkPlugins/group/kindnet/Start (43.56s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1219 03:31:02.241015  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-289681 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (43.561751455s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.56s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-fmgwz" [979951ba-40d8-4fd4-b20f-bb5d6d44fce5] Running
E1219 03:31:19.191269  257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-453239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00418715s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-289681 "pgrep -a kubelet"
I1219 03:31:23.161176  257493 config.go:182] Loaded profile config "bridge-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2r7rx" [d6e37743-6fa8-4b8e-bedf-550873417f64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2r7rx" [d6e37743-6fa8-4b8e-bedf-550873417f64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004127924s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-289681 "pgrep -a kubelet"
I1219 03:31:24.466173  257493 config.go:182] Loaded profile config "flannel-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-km246" [dda3e378-3a81-4254-bdb9-2fc6aa8c4ebf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-km246" [dda3e378-3a81-4254-bdb9-2fc6aa8c4ebf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004024067s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-289681 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-289681 exec deployment/netcat -- nslookup kubernetes.default
I1219 03:31:32.388964  257493 config.go:182] Loaded profile config "enable-default-cni-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tqnng" [c8daa5f6-72ea-4ca3-8163-66a19f0d2072] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tqnng" [c8daa5f6-72ea-4ca3-8163-66a19f0d2072] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004904373s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.28s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-289681 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-z9prr" [b38d71ff-31b3-4d9c-9a54-1469818a8a02] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003640876s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-289681 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-289681 exec deployment/netcat -- nslookup kubernetes.default
I1219 03:31:40.706128  257493 config.go:182] Loaded profile config "kindnet-289681": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-289681 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tcwl6" [aaabf373-668d-45b0-b68d-6aa02a7e967d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tcwl6" [aaabf373-668d-45b0-b68d-6aa02a7e967d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004612507s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-289681 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-289681 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
Test skip (33/420)

Order Skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
160 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
161 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
162 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
371 TestStartStop/group/disable-driver-mounts 0.23
385 TestNetworkPlugins/group/kubenet 5.7
394 TestNetworkPlugins/group/cilium 5.09

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

TestDownloadOnly/v1.34.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

TestDownloadOnly/v1.34.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.23s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-443690" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-443690
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

TestNetworkPlugins/group/kubenet (5.7s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-289681 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-289681
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-289681

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-289681

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-289681

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-289681" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 03:02:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-821572
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:59:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-340572
contexts:
- context:
    cluster: NoKubernetes-821572
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 03:02:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-821572
  name: NoKubernetes-821572
- context:
    cluster: kubernetes-upgrade-340572
    user: kubernetes-upgrade-340572
  name: kubernetes-upgrade-340572
current-context: NoKubernetes-821572
kind: Config
users:
- name: NoKubernetes-821572
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/NoKubernetes-821572/client.crt
    client-key: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/NoKubernetes-821572/client.key
- name: kubernetes-upgrade-340572
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/kubernetes-upgrade-340572/client.crt
    client-key: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/kubernetes-upgrade-340572/client.key
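The kubectl config dump above explains every `context was not found` failure in this debug log: only the `NoKubernetes-821572` and `kubernetes-upgrade-340572` contexts exist, so any probe addressed to `kubenet-289681` must fail. The following is an illustrative stand-alone sketch (not part of the test suite) that scans a trimmed copy of the `contexts:` section for context names; the helper name `context_names` is my own, and a real tool would use a YAML parser rather than this line scan.

```python
# A trimmed copy of the contexts section from the kubectl config dumped above.
KUBECONFIG = """\
apiVersion: v1
contexts:
- context:
    cluster: NoKubernetes-821572
    user: NoKubernetes-821572
  name: NoKubernetes-821572
- context:
    cluster: kubernetes-upgrade-340572
    user: kubernetes-upgrade-340572
  name: kubernetes-upgrade-340572
current-context: NoKubernetes-821572
kind: Config
"""

def context_names(config_text: str) -> list[str]:
    """Collect the `name:` of each entry in the top-level `contexts:` list."""
    names, in_contexts = [], False
    for line in config_text.splitlines():
        # A non-indented, non-list line starts a new top-level key.
        if line and not line.startswith((" ", "-")):
            in_contexts = line.rstrip() == "contexts:"
        # Each context list entry names itself at two-space indentation.
        if in_contexts and line.startswith("  name: "):
            names.append(line[len("  name: "):])
    return names

print(context_names(KUBECONFIG))                 # ['NoKubernetes-821572', 'kubernetes-upgrade-340572']
print("kubenet-289681" in context_names(KUBECONFIG))  # False
```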

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-289681

>>> host: docker daemon status:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: docker daemon config:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: docker system info:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: cri-docker daemon status:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: cri-docker daemon config:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: cri-dockerd version:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: containerd daemon status:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: containerd daemon config:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: containerd config dump:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: crio daemon status:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: crio daemon config:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: /etc/crio:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

>>> host: crio config:
* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-289681"

----------------------- debugLogs end: kubenet-289681 [took: 5.501635547s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-289681" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-289681
--- SKIP: TestNetworkPlugins/group/kubenet (5.70s)
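Every probe in the debug log above fails in one of two shapes: kubectl-based probes report a missing context, while host-side probes report a missing minikube profile (the `kubenet-289681` profile no longer exists when the skipped test collects its diagnostics). A minimal sketch that tells the two apart from a section's raw text; the matched substrings are verbatim from the log, while the function name and labels are illustrative, not minikube terminology.

```python
# Classify a debugLogs section body by which of the two error shapes it shows.
def classify_section(body: str) -> str:
    # kubectl emits either of these when the requested context is absent.
    if "context was not found" in body or "does not exist" in body:
        return "kubectl probe: context missing"
    # minikube emits this when the profile itself has been deleted.
    if 'not found. Run "minikube profile list"' in body:
        return "host probe: profile missing"
    return "other"

print(classify_section('error: context "kubenet-289681" does not exist'))
print(classify_section('* Profile "kubenet-289681" not found. Run "minikube profile list" to view all profiles.'))
```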

TestNetworkPlugins/group/cilium (5.09s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-289681 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-289681

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-289681" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-289681" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: kubelet daemon config:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> k8s: kubelet logs:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:59:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-340572
contexts:
- context:
    cluster: kubernetes-upgrade-340572
    user: kubernetes-upgrade-340572
  name: kubernetes-upgrade-340572
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-340572
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/kubernetes-upgrade-340572/client.crt
    client-key: /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/kubernetes-upgrade-340572/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-289681

>>> host: docker daemon status:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: docker daemon config:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: docker system info:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: cri-docker daemon status:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: cri-docker daemon config:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: cri-dockerd version:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: containerd daemon status:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: containerd daemon config:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: containerd config dump:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: crio daemon status:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: crio daemon config:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: /etc/crio:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

>>> host: crio config:
* Profile "cilium-289681" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-289681"

----------------------- debugLogs end: cilium-289681 [took: 4.851155406s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-289681" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-289681
--- SKIP: TestNetworkPlugins/group/cilium (5.09s)