Test Report: Docker_macOS 14555

0a167c9e2958e27b8ab0e3c17b04ac7cefde8636:2022-07-25:25010

Failed tests (23/289)

TestDownloadOnly/v1.16.0/preload-exists (0.11s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.11s)
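
The check that failed here is effectively a stat of the expected preload tarball under the test's minikube home. Below is a minimal standalone sketch of the same verification; it assumes MINIKUBE_HOME points at the directory containing .minikube (as in the Jenkins path in the failure message), and it is not the test's actual helper code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: MINIKUBE_HOME is the directory containing .minikube,
	// matching the layout shown in the failure message above.
	tarball := filepath.Join(os.Getenv("MINIKUBE_HOME"), ".minikube", "cache",
		"preloaded-tarball",
		"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		// Same failure mode reported by aaa_download_only_test.go:107.
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", tarball)
}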

TestFunctional/parallel/DashboardCmd (304.34s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220725122408-44543 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:910: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220725122408-44543 --alsologtostderr -v=1] ...
functional_test.go:902: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220725122408-44543 --alsologtostderr -v=1] stdout:
functional_test.go:902: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220725122408-44543 --alsologtostderr -v=1] stderr:
I0725 12:27:17.274089   48094 out.go:296] Setting OutFile to fd 1 ...
I0725 12:27:17.274648   48094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:27:17.274659   48094 out.go:309] Setting ErrFile to fd 2...
I0725 12:27:17.274666   48094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:27:17.274903   48094 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
I0725 12:27:17.275496   48094 mustload.go:65] Loading cluster: functional-20220725122408-44543
I0725 12:27:17.275826   48094 config.go:178] Loaded profile config "functional-20220725122408-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:27:17.276182   48094 cli_runner.go:164] Run: docker container inspect functional-20220725122408-44543 --format={{.State.Status}}
I0725 12:27:17.349758   48094 host.go:66] Checking if "functional-20220725122408-44543" exists ...
I0725 12:27:17.350084   48094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220725122408-44543
I0725 12:27:17.425023   48094 api_server.go:165] Checking apiserver status ...
I0725 12:27:17.425111   48094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:27:17.425167   48094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220725122408-44543
I0725 12:27:17.499665   48094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64363 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/functional-20220725122408-44543/id_rsa Username:docker}
I0725 12:27:17.592478   48094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9447/cgroup
W0725 12:27:17.600862   48094 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9447/cgroup: Process exited with status 1
stdout:

stderr:
I0725 12:27:17.600883   48094 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64367/healthz ...
I0725 12:27:17.608260   48094 api_server.go:266] https://127.0.0.1:64367/healthz returned 200:
ok
W0725 12:27:17.608283   48094 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0725 12:27:17.608442   48094 config.go:178] Loaded profile config "functional-20220725122408-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:27:17.608452   48094 addons.go:65] Setting dashboard=true in profile "functional-20220725122408-44543"
I0725 12:27:17.608460   48094 addons.go:153] Setting addon dashboard=true in "functional-20220725122408-44543"
I0725 12:27:17.608478   48094 host.go:66] Checking if "functional-20220725122408-44543" exists ...
I0725 12:27:17.608781   48094 cli_runner.go:164] Run: docker container inspect functional-20220725122408-44543 --format={{.State.Status}}
I0725 12:27:17.701295   48094 out.go:177]   - Using image kubernetesui/metrics-scraper:v1.0.8
I0725 12:27:17.743993   48094 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
I0725 12:27:17.765233   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0725 12:27:17.765282   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0725 12:27:17.765418   48094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220725122408-44543
I0725 12:27:17.875379   48094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64363 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/functional-20220725122408-44543/id_rsa Username:docker}
I0725 12:27:17.968459   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0725 12:27:17.968475   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0725 12:27:17.982845   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0725 12:27:17.982863   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0725 12:27:17.997396   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0725 12:27:17.997410   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0725 12:27:18.012162   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0725 12:27:18.012204   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0725 12:27:18.025943   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
I0725 12:27:18.025957   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0725 12:27:18.040164   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0725 12:27:18.040178   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0725 12:27:18.054295   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0725 12:27:18.054309   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0725 12:27:18.067394   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0725 12:27:18.067406   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0725 12:27:18.082127   48094 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0725 12:27:18.082143   48094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0725 12:27:18.096382   48094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0725 12:27:18.490592   48094 addons.go:116] Writing out "functional-20220725122408-44543" config to set dashboard=true...
W0725 12:27:18.491033   48094 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0725 12:27:18.492006   48094 kapi.go:59] client config for functional-20220725122408-44543: &rest.Config{Host:"https://127.0.0.1:64367", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:27:18.504761   48094 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  6f0f7d01-0d8c-40a1-a73f-e790deebdcb8 765 0 2022-07-25 12:27:18 -0700 PDT <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2022-07-25 12:27:18 -0700 PDT FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.8.83,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.8.83],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0725 12:27:18.504927   48094 out.go:239] * Launching proxy ...
* Launching proxy ...
I0725 12:27:18.505035   48094 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220725122408-44543 proxy --port 36195]
I0725 12:27:18.507785   48094 dashboard.go:157] Waiting for kubectl to output host:port ...
I0725 12:27:18.540829   48094 dashboard.go:175] proxy stdout: Starting to serve on 127.0. .1:36195
W0725 12:27:18.540881   48094 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0725 12:27:18.540922   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.540985   48094 retry.go:31] will retry after 110.466µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.541157   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.541174   48094 retry.go:31] will retry after 216.077µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.541442   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.541477   48094 retry.go:31] will retry after 262.026µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.541842   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.541861   48094 retry.go:31] will retry after 316.478µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.542259   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.542286   48094 retry.go:31] will retry after 468.098µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.542897   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.542948   48094 retry.go:31] will retry after 901.244µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.543953   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.543971   48094 retry.go:31] will retry after 644.295µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.544667   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.544682   48094 retry.go:31] will retry after 1.121724ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.546124   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.546140   48094 retry.go:31] will retry after 1.529966ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.547744   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.547759   48094 retry.go:31] will retry after 3.078972ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.551060   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.551129   48094 retry.go:31] will retry after 5.854223ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.557039   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.557080   48094 retry.go:31] will retry after 11.362655ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.568832   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.568874   48094 retry.go:31] will retry after 9.267303ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.578466   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.578495   48094 retry.go:31] will retry after 17.139291ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.595881   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.595912   48094 retry.go:31] will retry after 23.881489ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.621199   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.621293   48094 retry.go:31] will retry after 42.427055ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.665809   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.665860   48094 retry.go:31] will retry after 51.432832ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.717389   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.717424   48094 retry.go:31] will retry after 78.14118ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.795786   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.795819   48094 retry.go:31] will retry after 174.255803ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:18.970194   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:18.970250   48094 retry.go:31] will retry after 159.291408ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:19.131108   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:19.131139   48094 retry.go:31] will retry after 233.827468ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:19.365247   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:19.365293   48094 retry.go:31] will retry after 429.392365ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:19.794786   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:19.794817   48094 retry.go:31] will retry after 801.058534ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:20.597039   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:20.597078   48094 retry.go:31] will retry after 1.529087469s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:22.127803   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:22.127838   48094 retry.go:31] will retry after 1.335136154s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:23.464131   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:23.464183   48094 retry.go:31] will retry after 2.012724691s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:25.479156   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:25.479261   48094 retry.go:31] will retry after 4.744335389s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:30.224396   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:30.224453   48094 retry.go:31] will retry after 4.014454686s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:34.240451   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:34.240536   48094 retry.go:31] will retry after 11.635741654s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:27:45.878712   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:27:45.878796   48094 retry.go:31] will retry after 15.298130033s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:28:01.177437   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:28:01.177490   48094 retry.go:31] will retry after 19.631844237s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:28:20.809936   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:28:20.810009   48094 retry.go:31] will retry after 15.195386994s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:28:36.006438   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:28:36.006489   48094 retry.go:31] will retry after 28.402880652s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:29:04.410519   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:29:04.410627   48094 retry.go:31] will retry after 1m6.435206373s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:30:10.847521   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:30:10.847591   48094 retry.go:31] will retry after 1m28.514497132s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:31:39.364671   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:31:39.364719   48094 retry.go:31] will retry after 34.767217402s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0725 12:32:14.133073   48094 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0725 12:32:14.133224   48094 retry.go:31] will retry after 1m5.688515861s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
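
Every probe in the retry tail above targets "http:///..." with an empty host, which is why net/http keeps returning "no Host in request URL": the host:port was never recovered from kubectl proxy's garbled stdout banner ("Starting to serve on 127.0. .1:36195" at 12:27:18). A minimal sketch of that failure mode, assuming a simple regexp extraction (the pattern and flow are hypothetical illustrations, not minikube's dashboard.go):

package main

import (
	"fmt"
	"net/http"
	"regexp"
)

// Hypothetical pattern for pulling "host:port" out of kubectl proxy's
// "Starting to serve on 127.0.0.1:36195" banner.
var hostPort = regexp.MustCompile(`Starting to serve on (\d+\.\d+\.\d+\.\d+:\d+)`)

func main() {
	// The proxy stdout as captured in the log above; the garbled IP means
	// the pattern finds no match and the extracted host stays empty.
	stdout := "Starting to serve on 127.0. .1:36195"
	host := ""
	if m := hostPort.FindStringSubmatch(stdout); m != nil {
		host = m[1]
	}
	url := fmt.Sprintf("http://%s/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/", host)
	if _, err := http.Get(url); err != nil {
		// With an empty host, net/http fails exactly as in the retry log:
		// Get "http:///...": http: no Host in request URL
		fmt.Println(err)
	}
}

With a well-formed banner ("Starting to serve on 127.0.0.1:36195") the same probe would carry a host and the health check could succeed.
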
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220725122408-44543
helpers_test.go:235: (dbg) docker inspect functional-20220725122408-44543:

-- stdout --
	[
	    {
	        "Id": "0aeb5ddbe7628b29fdfef00869e32090c506507aba59e461667e3a65f585d44e",
	        "Created": "2022-07-25T19:24:14.974916431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T19:24:15.274927505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/0aeb5ddbe7628b29fdfef00869e32090c506507aba59e461667e3a65f585d44e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0aeb5ddbe7628b29fdfef00869e32090c506507aba59e461667e3a65f585d44e/hostname",
	        "HostsPath": "/var/lib/docker/containers/0aeb5ddbe7628b29fdfef00869e32090c506507aba59e461667e3a65f585d44e/hosts",
	        "LogPath": "/var/lib/docker/containers/0aeb5ddbe7628b29fdfef00869e32090c506507aba59e461667e3a65f585d44e/0aeb5ddbe7628b29fdfef00869e32090c506507aba59e461667e3a65f585d44e-json.log",
	        "Name": "/functional-20220725122408-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220725122408-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220725122408-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a911d0252d4e2cfb3c8c81b9a13af5a9225119b18170f8e74919a4b98179c589-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a911d0252d4e2cfb3c8c81b9a13af5a9225119b18170f8e74919a4b98179c589/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a911d0252d4e2cfb3c8c81b9a13af5a9225119b18170f8e74919a4b98179c589/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a911d0252d4e2cfb3c8c81b9a13af5a9225119b18170f8e74919a4b98179c589/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220725122408-44543",
	                "Source": "/var/lib/docker/volumes/functional-20220725122408-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220725122408-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220725122408-44543",
	                "name.minikube.sigs.k8s.io": "functional-20220725122408-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ecdf92dcc2d497f7b34d77c08cae79aea939865a2cdf89ee28dbcd16443d4c51",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64363"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64364"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64365"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64366"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64367"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ecdf92dcc2d4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220725122408-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0aeb5ddbe762",
	                        "functional-20220725122408-44543"
	                    ],
	                    "NetworkID": "b615ad99053f19257998128e201cc869f2768f8bc5216266fd15ace89365b0af",
	                    "EndpointID": "37e2367f9832d35b6ad6e5fef23caf48a27feb4eb6560b2b83561cd61c5783e4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220725122408-44543 -n functional-20220725122408-44543
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 logs -n 25: (3.158868457s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------------------------------------------------------------------------|---------------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                  Args                                                  |             Profile             |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------|---------------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | image ls                                                                                               |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543 image load                                                             | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar                                                        |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | image ls                                                                                               |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543 image save --daemon                                                    | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220725122408-44543                                 |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /etc/test/nested/copy/44543/hosts                                                                      |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /etc/ssl/certs/44543.pem                                                                               |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /usr/share/ca-certificates/44543.pem                                                                   |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /etc/ssl/certs/51391683.0                                                                              |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /etc/ssl/certs/445432.pem                                                                              |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /usr/share/ca-certificates/445432.pem                                                                  |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:27 PDT | 25 Jul 22 12:27 PDT |
	|                | ssh sudo cat                                                                                           |                                 |         |         |                     |                     |
	|                | /etc/ssl/certs/3ec20f2e.0                                                                              |                                 |         |         |                     |                     |
	| cp             | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | cp testdata/cp-test.txt                                                                                |                                 |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                               |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | ssh -n                                                                                                 |                                 |         |         |                     |                     |
	|                | functional-20220725122408-44543                                                                        |                                 |         |         |                     |                     |
	|                | sudo cat                                                                                               |                                 |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                               |                                 |         |         |                     |                     |
	| cp             | functional-20220725122408-44543 cp functional-20220725122408-44543:/home/docker/cp-test.txt            | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2392516869/001/cp-test.txt |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | ssh -n                                                                                                 |                                 |         |         |                     |                     |
	|                | functional-20220725122408-44543                                                                        |                                 |         |         |                     |                     |
	|                | sudo cat                                                                                               |                                 |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                               |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | image ls --format short                                                                                |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | image ls --format yaml                                                                                 |                                 |         |         |                     |                     |
	| ssh            | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT |                     |
	|                | ssh pgrep buildkitd                                                                                    |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543 image build -t                                                         | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | localhost/my-image:functional-20220725122408-44543                                                     |                                 |         |         |                     |                     |
	|                | testdata/build                                                                                         |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | image ls                                                                                               |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | image ls --format json                                                                                 |                                 |         |         |                     |                     |
	| image          | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | image ls --format table                                                                                |                                 |         |         |                     |                     |
	| update-context | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | update-context                                                                                         |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                 |                                 |         |         |                     |                     |
	| update-context | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | update-context                                                                                         |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                 |                                 |         |         |                     |                     |
	| update-context | functional-20220725122408-44543                                                                        | functional-20220725122408-44543 | jenkins | v1.26.0 | 25 Jul 22 12:28 PDT | 25 Jul 22 12:28 PDT |
	|                | update-context                                                                                         |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                 |                                 |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------|---------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 12:27:16
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 12:27:16.150528   48061 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:27:16.150691   48061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:27:16.150696   48061 out.go:309] Setting ErrFile to fd 2...
	I0725 12:27:16.150700   48061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:27:16.150816   48061 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:27:16.151359   48061 out.go:303] Setting JSON to false
	I0725 12:27:16.170063   48061 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":12408,"bootTime":1658764828,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:27:16.170191   48061 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:27:16.207750   48061 out.go:177] * [functional-20220725122408-44543] minikube v1.26.0 on Darwin 12.4
	I0725 12:27:16.320316   48061 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 12:27:16.362415   48061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:27:16.404495   48061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:27:16.467515   48061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:27:16.530432   48061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 12:27:16.572624   48061 config.go:178] Loaded profile config "functional-20220725122408-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 12:27:16.572972   48061 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 12:27:16.731630   48061 docker.go:137] docker version: linux-20.10.17
	I0725 12:27:16.731801   48061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:27:16.887941   48061 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 19:27:16.806523465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:27:16.930354   48061 out.go:177] * Using the docker driver based on existing profile
	I0725 12:27:16.951336   48061 start.go:284] selected driver: docker
	I0725 12:27:16.951363   48061 start.go:808] validating driver "docker" against &{Name:functional-20220725122408-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220725122408-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:27:16.951515   48061 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:27:16.951647   48061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:27:17.154775   48061 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 19:27:17.076935394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:27:17.156818   48061 cni.go:95] Creating CNI manager for ""
	I0725 12:27:17.156837   48061 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:27:17.156850   48061 start_flags.go:310] config:
	{Name:functional-20220725122408-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220725122408-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:27:17.200234   48061 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 19:24:15 UTC, end at Mon 2022-07-25 19:32:18 UTC. --
	Jul 25 19:26:06 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:06.087904692Z" level=info msg="Daemon has completed initialization"
	Jul 25 19:26:06 functional-20220725122408-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 19:26:06 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:06.111274697Z" level=info msg="API listen on [::]:2376"
	Jul 25 19:26:06 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:06.113638280Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 19:26:06 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:06.358067819Z" level=error msg="ed0443b14b1ae18635354f43b843ea808f75d9cb53233c3e81064378e180b6f5 cleanup: failed to delete container from containerd: no such container"
	Jul 25 19:26:06 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:06.429602300Z" level=error msg="46db04cf3461096e10e4f0b2e0c1f51621e6e2dba0030f916445d8213d09b222 cleanup: failed to delete container from containerd: no such container"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.476880018Z" level=info msg="ignoring event" container=fb1b0bb500aeec0dc7980428cf7bafb3c8cce3097534e0e9874a6497e10dedf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.480681884Z" level=info msg="ignoring event" container=4a70b28e8d7374f1d02d176717038cb84264551ce16d89736133a81c99bafbec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.545742502Z" level=info msg="ignoring event" container=56415bf954be9e4b21fce19a9d612e408f1f9dd254781306f66777cbe630f6cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.547638185Z" level=info msg="ignoring event" container=108e32e35c63c59dda348f23e3544668990ee4be041f679b8ba903665f05667d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.550920977Z" level=info msg="ignoring event" container=cb83e0c960321783af36f9c2d4ef54d03993a3635c564d1afbda57138cb6bd53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.554481106Z" level=info msg="ignoring event" container=054c47717fc27de2192a079b18debcd257db3c1362ab99036c546c23fbbc4d8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.556886669Z" level=info msg="ignoring event" container=7762972ba09979c96c7cb894e74bb470afd5f6d5b5c54ceafc585e00a09a3a82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.569634126Z" level=info msg="ignoring event" container=dd1f1ec56b3e3eb89abde0e5be64b758f65f98c390e8984e51924d3c8f20f5a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.569691378Z" level=info msg="ignoring event" container=07c0e6e822963f74124129cc6e0c9b171902bd71b5ae0ceca15ebec7e046311b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:08 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:08.689027060Z" level=info msg="ignoring event" container=378d346bade4e6b9aef00d89097bacf39efc24668f4be4b6d634cdee5b7dc9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:16 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:16.407409211Z" level=info msg="ignoring event" container=461af656253efa2c7abff0118895867922dcef4fa7735b85896701ffe9d41fea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:56 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:56.365561106Z" level=info msg="ignoring event" container=76edb4da0f845ad19c4912d8e2f949a6deabe320b0bfc47a703509a7941c9811 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:26:56 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:26:56.412728777Z" level=info msg="ignoring event" container=44233dd7a1bb5ee5c71842974f47a91dd808378658b153a8457df3bfbbc93382 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:27:10 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:27:10.840311352Z" level=info msg="ignoring event" container=c902969392e15f0759dc66155788d0a4e0bc8abb42c83534438e418b52394a8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:27:12 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:27:12.232826917Z" level=info msg="ignoring event" container=a9d44c46a76018b66aa19527ead87f6d1fa3d1214a3acab64d3d69c746c7856f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:27:19 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:27:19.377140658Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 25 19:27:22 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:27:22.100201548Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 19:28:09 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:28:09.657404982Z" level=info msg="ignoring event" container=46c42dee01c186838cd86f8c435a9ea729e6270cea9b0c94e12341b7a9f8ee74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 19:28:09 functional-20220725122408-44543 dockerd[7232]: time="2022-07-25T19:28:09.771526482Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID
	f0b0be6b57588       mysql@sha256:bbe0e2b0a33ef5c3a983e490dcb3c1a42d623db1d5679e82f65cce3f32c8f254                          4 minutes ago       Running             mysql                       0                   675829d3b4fbf
	1b4ea3de25356       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3         4 minutes ago       Running             kubernetes-dashboard        0                   979ee3d00323f
	bcd5ea2b91734       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 minutes ago       Running             dashboard-metrics-scraper   0                   11b195f8d24de
	c902969392e15       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    5 minutes ago       Exited              mount-munger                0                   a9d44c46a7601
	2a2e55d27e67d       82e4c8a736a4f                                                                                          5 minutes ago       Running             echoserver                  0                   4028065ce21f1
	462990e39ed99       nginx@sha256:1761fb5661e4d77e107427d8012ad3a5955007d997e0f4a3d41acc9ff20467c7                          5 minutes ago       Running             myfrontend                  0                   8b114e0f16189
	5e2d805add78c       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          5 minutes ago       Running             echoserver                  0                   f0119402236a9
	815a090b6db3f       nginx@sha256:87fb6f4040ffd52dd616f360b8520ed4482930ea75417182ad3f76c4aaadf24f                          5 minutes ago       Running             nginx                       0                   915a2ed23997f
	8805dc8a6f48d       6e38f40d628db                                                                                          6 minutes ago       Running             storage-provisioner         4                   07cfa694b9d58
	a462b93ad3bbe       a4ca41631cc7a                                                                                          6 minutes ago       Running             coredns                     3                   8e1c95a28a65e
	42333172fea22       a634548d10b03                                                                                          6 minutes ago       Running             kube-proxy                  3                   81980696595b4
	e64533b9a6bd5       34cdf99b1bb3b                                                                                          6 minutes ago       Running             kube-controller-manager     4                   1ed28804fa764
	8b784f4651725       d3377ffb7177c                                                                                          6 minutes ago       Running             kube-apiserver              0                   1143a60d6e6a9
	6905cfd756ab4       aebe758cef4cd                                                                                          6 minutes ago       Running             etcd                        4                   9f29af5b3a64e
	acac7f5b7c7eb       5d725196c1f47                                                                                          6 minutes ago       Running             kube-scheduler              4                   86a9874e91cf3
	378d346bade4e       5d725196c1f47                                                                                          6 minutes ago       Exited              kube-scheduler              3                   108e32e35c63c
	4a70b28e8d737       aebe758cef4cd                                                                                          6 minutes ago       Exited              etcd                        3                   56415bf954be9
	07c0e6e822963       34cdf99b1bb3b                                                                                          6 minutes ago       Exited              kube-controller-manager     3                   fb1b0bb500aee
	46db04cf34610       6e38f40d628db                                                                                          6 minutes ago       Created             storage-provisioner         3                   09b9970f55677
	ed0443b14b1ae       a634548d10b03                                                                                          6 minutes ago       Created             kube-proxy                  2                   66b1daa0b21bb
	6f1d8f1b1c02c       a4ca41631cc7a                                                                                          6 minutes ago       Exited              coredns                     2                   973755dbbaaa5
	
	* 
	* ==> coredns [6f1d8f1b1c02] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [a462b93ad3bb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220725122408-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220725122408-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=functional-20220725122408-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T12_24_34_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 19:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220725122408-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 19:32:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 19:28:16 +0000   Mon, 25 Jul 2022 19:24:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 19:28:16 +0000   Mon, 25 Jul 2022 19:24:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 19:28:16 +0000   Mon, 25 Jul 2022 19:24:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 19:28:16 +0000   Mon, 25 Jul 2022 19:24:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220725122408-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                9c53d2e5-b6b3-4666-bda6-7b47028c044e
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54c4b5c49f-9nl97                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  default                     hello-node-connect-578cdc45cb-jlnxx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  default                     mysql-67f7d69d8b-d7b9b                                     600m (10%)    700m (11%)  512Mi (8%)       700Mi (11%)    4m35s
	  default                     nginx-svc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     sp-pod                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 coredns-6d4b75cb6d-cjb8s                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     7m32s
	  kube-system                 etcd-functional-20220725122408-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         7m45s
	  kube-system                 kube-apiserver-functional-20220725122408-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-functional-20220725122408-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-95d2d                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-functional-20220725122408-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-78dbd9dbf5-qw2ph                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-fzfvh                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (22%)  700m (11%)
	  memory             682Mi (11%)  870Mi (14%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m3s                 kube-proxy       
	  Normal  Starting                 7m3s                 kube-proxy       
	  Normal  Starting                 7m31s                kube-proxy       
	  Normal  Starting                 7m45s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m45s                kubelet          Node functional-20220725122408-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s                kubelet          Node functional-20220725122408-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s                kubelet          Node functional-20220725122408-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m35s                kubelet          Node functional-20220725122408-44543 status is now: NodeReady
	  Normal  RegisteredNode           7m33s                node-controller  Node functional-20220725122408-44543 event: Registered Node functional-20220725122408-44543 in Controller
	  Normal  RegisteredNode           6m52s                node-controller  Node functional-20220725122408-44543 event: Registered Node functional-20220725122408-44543 in Controller
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node functional-20220725122408-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node functional-20220725122408-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node functional-20220725122408-44543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m53s                node-controller  Node functional-20220725122408-44543 event: Registered Node functional-20220725122408-44543 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001466] FS-Cache: O-key=[8] '17ff1b0300000000'
	[  +0.001104] FS-Cache: N-cookie c=000000004d55404a [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.001777] FS-Cache: N-cookie d=0000000042f28ee6 n=00000000d4493a29
	[  +0.001412] FS-Cache: N-key=[8] '17ff1b0300000000'
	[  +0.002998] FS-Cache: Duplicate cookie detected
	[  +0.001046] FS-Cache: O-cookie c=000000009c120975 [p=00000000b8586770 fl=226 nc=0 na=1]
	[  +0.001828] FS-Cache: O-cookie d=0000000042f28ee6 n=00000000da9a003a
	[  +0.001534] FS-Cache: O-key=[8] '17ff1b0300000000'
	[  +0.001185] FS-Cache: N-cookie c=000000004d55404a [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.001802] FS-Cache: N-cookie d=0000000042f28ee6 n=00000000e00d4042
	[  +0.001494] FS-Cache: N-key=[8] '17ff1b0300000000'
	[  +3.310474] FS-Cache: Duplicate cookie detected
	[  +0.001024] FS-Cache: O-cookie c=000000006d166cd5 [p=00000000b8586770 fl=226 nc=0 na=1]
	[  +0.001787] FS-Cache: O-cookie d=0000000042f28ee6 n=000000002e66eab5
	[  +0.001771] FS-Cache: O-key=[8] '16ff1b0300000000'
	[  +0.001129] FS-Cache: N-cookie c=000000004eb511b0 [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.001806] FS-Cache: N-cookie d=0000000042f28ee6 n=00000000f4062f04
	[  +0.001537] FS-Cache: N-key=[8] '16ff1b0300000000'
	[  +0.463724] FS-Cache: Duplicate cookie detected
	[  +0.001228] FS-Cache: O-cookie c=00000000e0ab6724 [p=00000000b8586770 fl=226 nc=0 na=1]
	[  +0.002297] FS-Cache: O-cookie d=0000000042f28ee6 n=000000004a36e55c
	[  +0.001837] FS-Cache: O-key=[8] '1dff1b0300000000'
	[  +0.001364] FS-Cache: N-cookie c=00000000f2e66760 [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.002135] FS-Cache: N-cookie d=0000000042f28ee6 n=0000000079fcd9bf
	[  +0.001675] FS-Cache: N-key=[8] '1dff1b0300000000'
	
	* 
	* ==> etcd [4a70b28e8d73] <==
	* {"level":"info","ts":"2022-07-25T19:26:06.992Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T19:26:06.992Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
	{"level":"info","ts":"2022-07-25T19:26:06.992Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"0452feec7","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":6,"max-cpu-available":6,"member-initialized":true,"name":"functional-20220725122408-44543","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-s
ize-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2022-07-25T19:26:06.994Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"1.323107ms"}
	{"level":"info","ts":"2022-07-25T19:26:06.995Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2022-07-25T19:26:07.047Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":511}
	{"level":"info","ts":"2022-07-25T19:26:07.048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
	{"level":"info","ts":"2022-07-25T19:26:07.048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 4"}
	{"level":"info","ts":"2022-07-25T19:26:07.048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 4, commit: 511, applied: 0, lastindex: 511, lastterm: 4]"}
	{"level":"warn","ts":"2022-07-25T19:26:07.048Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2022-07-25T19:26:07.051Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":484}
	{"level":"info","ts":"2022-07-25T19:26:07.052Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2022-07-25T19:26:07.054Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
	{"level":"info","ts":"2022-07-25T19:26:07.054Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-07-25T19:26:07.055Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-25T19:26:07.055Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T19:26:07.055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-07-25T19:26:07.055Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-07-25T19:26:07.055Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T19:26:07.055Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T19:26:07.057Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T19:26:07.057Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-25T19:26:07.057Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-25T19:26:07.057Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T19:26:07.057Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> etcd [6905cfd756ab] <==
	* {"level":"info","ts":"2022-07-25T19:26:11.263Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T19:26:11.263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-07-25T19:26:11.263Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-07-25T19:26:11.263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T19:26:11.263Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T19:26:11.263Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-25T19:26:11.263Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-25T19:26:11.266Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T19:26:11.266Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2022-07-25T19:26:12.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-07-25T19:26:12.256Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220725122408-44543 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T19:26:12.256Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T19:26:12.256Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T19:26:12.256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T19:26:12.256Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T19:26:12.258Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-07-25T19:26:12.258Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-07-25T19:27:54.892Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"154.489823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-07-25T19:27:54.892Z","caller":"traceutil/trace.go:171","msg":"trace[1662158199] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:837; }","duration":"154.713943ms","start":"2022-07-25T19:27:54.738Z","end":"2022-07-25T19:27:54.892Z","steps":["trace[1662158199] 'count revisions from in-memory index tree'  (duration: 154.395929ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:32:19 up 13 min,  0 users,  load average: 0.20, 0.57, 0.48
	Linux functional-20220725122408-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [8b784f465172] <==
	* I0725 19:26:14.452618       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0725 19:26:14.452775       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 19:26:14.452803       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 19:26:14.453447       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0725 19:26:14.453735       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 19:26:14.452791       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 19:26:14.457219       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 19:26:15.121353       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 19:26:15.348162       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 19:26:15.884764       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 19:26:15.890087       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 19:26:15.914129       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 19:26:15.923814       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 19:26:15.927932       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 19:26:16.030453       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 19:26:34.000459       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 19:26:38.346355       1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.104.199.95]
	I0725 19:26:38.359584       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 19:26:49.940548       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 19:26:50.018405       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.100.197.52]
	I0725 19:27:00.114766       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.110.133.209]
	I0725 19:27:18.261688       1 controller.go:611] quota admission added evaluator for: namespaces
	I0725 19:27:18.438989       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.8.83]
	I0725 19:27:18.450670       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.103.243]
	I0725 19:27:44.125213       1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.110.46.173]
	
	* 
	* ==> kube-controller-manager [07c0e6e82296] <==
	* I0725 19:26:07.781029       1 serving.go:348] Generated self-signed cert in-memory
	I0725 19:26:08.282080       1 controllermanager.go:180] Version: v1.24.2
	I0725 19:26:08.282117       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 19:26:08.283118       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0725 19:26:08.283156       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 19:26:08.283288       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 19:26:08.283392       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [e64533b9a6bd] <==
	* I0725 19:26:26.999441       1 shared_informer.go:262] Caches are synced for daemon sets
	I0725 19:26:27.057017       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 19:26:27.093154       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 19:26:27.470432       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 19:26:27.470500       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 19:26:27.473957       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 19:26:43.985999       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0725 19:26:49.942716       1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-578cdc45cb to 1"
	I0725 19:26:50.004993       1 event.go:294] "Event occurred" object="default/hello-node-connect-578cdc45cb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-578cdc45cb-jlnxx"
	I0725 19:27:00.069162       1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54c4b5c49f to 1"
	I0725 19:27:00.072431       1 event.go:294] "Event occurred" object="default/hello-node-54c4b5c49f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54c4b5c49f-9nl97"
	I0725 19:27:18.293810       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-78dbd9dbf5 to 1"
	I0725 19:27:18.307273       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0725 19:27:18.307343       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-78dbd9dbf5-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 19:27:18.311011       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" failed with pods "dashboard-metrics-scraper-78dbd9dbf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 19:27:18.311237       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 19:27:18.316693       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 19:27:18.318735       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" failed with pods "dashboard-metrics-scraper-78dbd9dbf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 19:27:18.318954       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-78dbd9dbf5-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 19:27:18.322260       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 19:27:18.322387       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 19:27:18.328647       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-78dbd9dbf5-qw2ph"
	I0725 19:27:18.332923       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-fzfvh"
	I0725 19:27:44.142649       1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-67f7d69d8b to 1"
	I0725 19:27:44.149068       1 event.go:294] "Event occurred" object="default/mysql-67f7d69d8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-67f7d69d8b-d7b9b"
	
	* 
	* ==> kube-proxy [42333172fea2] <==
	* I0725 19:26:16.008067       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0725 19:26:16.008165       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0725 19:26:16.008248       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 19:26:16.026301       1 server_others.go:206] "Using iptables Proxier"
	I0725 19:26:16.026385       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 19:26:16.026407       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 19:26:16.026424       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 19:26:16.026483       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 19:26:16.027407       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 19:26:16.027831       1 server.go:661] "Version info" version="v1.24.2"
	I0725 19:26:16.027862       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 19:26:16.028760       1 config.go:226] "Starting endpoint slice config controller"
	I0725 19:26:16.028838       1 config.go:317] "Starting service config controller"
	I0725 19:26:16.028847       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 19:26:16.028783       1 config.go:444] "Starting node config controller"
	I0725 19:26:16.028882       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 19:26:16.028932       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 19:26:16.129303       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 19:26:16.129709       1 shared_informer.go:262] Caches are synced for node config
	I0725 19:26:16.129803       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [ed0443b14b1a] <==
	* 
	* 
	* ==> kube-scheduler [378d346bade4] <==
	* I0725 19:26:07.771309       1 serving.go:348] Generated self-signed cert in-memory
	W0725 19:26:08.650626       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W0725 19:26:08.650666       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 19:26:08.650672       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 19:26:08.653289       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0725 19:26:08.653304       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 19:26:08.654606       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 19:26:08.655494       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 19:26:08.655524       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0725 19:26:08.655591       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 19:26:08.655599       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 19:26:08.655600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 19:26:08.655614       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 19:26:08.655648       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0725 19:26:08.656926       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [acac7f5b7c7e] <==
	* I0725 19:26:11.891171       1 serving.go:348] Generated self-signed cert in-memory
	W0725 19:26:14.367351       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 19:26:14.367390       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 19:26:14.367398       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 19:26:14.367403       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 19:26:14.458498       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0725 19:26:14.458539       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 19:26:14.462128       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 19:26:14.463023       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 19:26:14.463031       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 19:26:14.463044       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 19:26:14.563506       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 19:24:15 UTC, end at Mon 2022-07-25 19:32:20 UTC. --
	Jul 25 19:26:58 functional-20220725122408-44543 kubelet[8988]: I0725 19:26:58.336665    8988 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d9fa7817-96de-4fb5-af70-1bd6ae8aa897 path="/var/lib/kubelet/pods/d9fa7817-96de-4fb5-af70-1bd6ae8aa897/volumes"
	Jul 25 19:27:00 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:00.075786    8988 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 19:27:00 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:00.142410    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngdp\" (UniqueName: \"kubernetes.io/projected/fc45db03-9a8a-4733-80bb-7323d6d80c01-kube-api-access-bngdp\") pod \"hello-node-54c4b5c49f-9nl97\" (UID: \"fc45db03-9a8a-4733-80bb-7323d6d80c01\") " pod="default/hello-node-54c4b5c49f-9nl97"
	Jul 25 19:27:08 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:08.617932    8988 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 19:27:08 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:08.719326    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f9f1e489-c871-4b87-bf48-bcc0f068b174-test-volume\") pod \"busybox-mount\" (UID: \"f9f1e489-c871-4b87-bf48-bcc0f068b174\") " pod="default/busybox-mount"
	Jul 25 19:27:08 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:08.719380    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n79c8\" (UniqueName: \"kubernetes.io/projected/f9f1e489-c871-4b87-bf48-bcc0f068b174-kube-api-access-n79c8\") pod \"busybox-mount\" (UID: \"f9f1e489-c871-4b87-bf48-bcc0f068b174\") " pod="default/busybox-mount"
	Jul 25 19:27:09 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:09.167254    8988 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a9d44c46a76018b66aa19527ead87f6d1fa3d1214a3acab64d3d69c746c7856f"
	Jul 25 19:27:12 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:12.450071    8988 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f9f1e489-c871-4b87-bf48-bcc0f068b174-test-volume\") pod \"f9f1e489-c871-4b87-bf48-bcc0f068b174\" (UID: \"f9f1e489-c871-4b87-bf48-bcc0f068b174\") "
	Jul 25 19:27:12 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:12.450242    8988 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n79c8\" (UniqueName: \"kubernetes.io/projected/f9f1e489-c871-4b87-bf48-bcc0f068b174-kube-api-access-n79c8\") pod \"f9f1e489-c871-4b87-bf48-bcc0f068b174\" (UID: \"f9f1e489-c871-4b87-bf48-bcc0f068b174\") "
	Jul 25 19:27:12 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:12.450280    8988 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f1e489-c871-4b87-bf48-bcc0f068b174-test-volume" (OuterVolumeSpecName: "test-volume") pod "f9f1e489-c871-4b87-bf48-bcc0f068b174" (UID: "f9f1e489-c871-4b87-bf48-bcc0f068b174"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 25 19:27:12 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:12.452903    8988 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f1e489-c871-4b87-bf48-bcc0f068b174-kube-api-access-n79c8" (OuterVolumeSpecName: "kube-api-access-n79c8") pod "f9f1e489-c871-4b87-bf48-bcc0f068b174" (UID: "f9f1e489-c871-4b87-bf48-bcc0f068b174"). InnerVolumeSpecName "kube-api-access-n79c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 25 19:27:12 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:12.550920    8988 reconciler.go:312] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/f9f1e489-c871-4b87-bf48-bcc0f068b174-test-volume\") on node \"functional-20220725122408-44543\" DevicePath \"\""
	Jul 25 19:27:12 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:12.550965    8988 reconciler.go:312] "Volume detached for volume \"kube-api-access-n79c8\" (UniqueName: \"kubernetes.io/projected/f9f1e489-c871-4b87-bf48-bcc0f068b174-kube-api-access-n79c8\") on node \"functional-20220725122408-44543\" DevicePath \"\""
	Jul 25 19:27:13 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:13.215293    8988 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a9d44c46a76018b66aa19527ead87f6d1fa3d1214a3acab64d3d69c746c7856f"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.406652    8988 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: E0725 19:27:18.406698    8988 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="f9f1e489-c871-4b87-bf48-bcc0f068b174" containerName="mount-munger"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.406722    8988 memory_manager.go:345] "RemoveStaleState removing state" podUID="f9f1e489-c871-4b87-bf48-bcc0f068b174" containerName="mount-munger"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.406802    8988 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.604074    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzmkt\" (UniqueName: \"kubernetes.io/projected/b7606102-ee0f-49d4-bd92-87912113ec32-kube-api-access-tzmkt\") pod \"kubernetes-dashboard-5fd5574d9f-fzfvh\" (UID: \"b7606102-ee0f-49d4-bd92-87912113ec32\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-fzfvh"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.604305    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pc9\" (UniqueName: \"kubernetes.io/projected/b7b6a2e8-88d0-470b-b34e-857743b995ab-kube-api-access-p2pc9\") pod \"dashboard-metrics-scraper-78dbd9dbf5-qw2ph\" (UID: \"b7b6a2e8-88d0-470b-b34e-857743b995ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5-qw2ph"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.604433    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7b6a2e8-88d0-470b-b34e-857743b995ab-tmp-volume\") pod \"dashboard-metrics-scraper-78dbd9dbf5-qw2ph\" (UID: \"b7b6a2e8-88d0-470b-b34e-857743b995ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5-qw2ph"
	Jul 25 19:27:18 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:18.604533    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b7606102-ee0f-49d4-bd92-87912113ec32-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-fzfvh\" (UID: \"b7606102-ee0f-49d4-bd92-87912113ec32\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-fzfvh"
	Jul 25 19:27:44 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:44.149163    8988 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 19:27:44 functional-20220725122408-44543 kubelet[8988]: I0725 19:27:44.312718    8988 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47m49\" (UniqueName: \"kubernetes.io/projected/f9a08f1f-9e8c-4f26-ad75-3a9f795b6aa3-kube-api-access-47m49\") pod \"mysql-67f7d69d8b-d7b9b\" (UID: \"f9a08f1f-9e8c-4f26-ad75-3a9f795b6aa3\") " pod="default/mysql-67f7d69d8b-d7b9b"
	Jul 25 19:31:10 functional-20220725122408-44543 kubelet[8988]: W0725 19:31:10.082472    8988 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> kubernetes-dashboard [1b4ea3de2535] <==
	* 2022/07/25 19:27:28 Starting overwatch
	2022/07/25 19:27:28 Using namespace: kubernetes-dashboard
	2022/07/25 19:27:28 Using in-cluster config to connect to apiserver
	2022/07/25 19:27:28 Using secret token for csrf signing
	2022/07/25 19:27:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 19:27:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 19:27:28 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 19:27:28 Generating JWE encryption key
	2022/07/25 19:27:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 19:27:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 19:27:28 Initializing JWE encryption key from synchronized object
	2022/07/25 19:27:28 Creating in-cluster Sidecar client
	2022/07/25 19:27:28 Serving insecurely on HTTP port: 9090
	2022/07/25 19:27:28 Successful request to sidecar
	
	* 
	* ==> storage-provisioner [46db04cf3461] <==
	* 
	* 
	* ==> storage-provisioner [8805dc8a6f48] <==
	* I0725 19:26:16.586893       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 19:26:16.595915       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 19:26:16.595963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 19:26:34.001710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 19:26:34.001829       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220725122408-44543_0508bfcd-c194-4851-81d7-dc4b1f47fdec!
	I0725 19:26:34.002261       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c6f243a6-2320-4107-94db-4a43fe663957", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220725122408-44543_0508bfcd-c194-4851-81d7-dc4b1f47fdec became leader
	I0725 19:26:34.102905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220725122408-44543_0508bfcd-c194-4851-81d7-dc4b1f47fdec!
	I0725 19:26:43.985362       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0725 19:26:43.985415       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e3fc1cb1-f308-4acb-92a4-f46928edbd9d 352 0 2022-07-25 19:24:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-07-25 19:24:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-19b0dc84-fdd5-4387-9027-98ee681a6ac4 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  19b0dc84-fdd5-4387-9027-98ee681a6ac4 596 0 2022-07-25 19:26:43 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-07-25 19:26:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2022-07-25 19:26:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0725 19:26:43.985885       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-19b0dc84-fdd5-4387-9027-98ee681a6ac4" provisioned
	I0725 19:26:43.985915       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0725 19:26:43.985920       1 volume_store.go:212] Trying to save persistentvolume "pvc-19b0dc84-fdd5-4387-9027-98ee681a6ac4"
	I0725 19:26:43.986195       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"19b0dc84-fdd5-4387-9027-98ee681a6ac4", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0725 19:26:43.992625       1 volume_store.go:219] persistentvolume "pvc-19b0dc84-fdd5-4387-9027-98ee681a6ac4" saved
	I0725 19:26:43.993122       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"19b0dc84-fdd5-4387-9027-98ee681a6ac4", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-19b0dc84-fdd5-4387-9027-98ee681a6ac4
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220725122408-44543 -n functional-20220725122408-44543
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220725122408-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220725122408-44543 describe pod busybox-mount
helpers_test.go:280: (dbg) kubectl --context functional-20220725122408-44543 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220725122408-44543/192.168.49.2
	Start Time:   Mon, 25 Jul 2022 12:27:08 -0700
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.7
	IPs:
	  IP:  172.17.0.7
	Containers:
	  mount-munger:
	    Container ID:  docker://c902969392e15f0759dc66155788d0a4e0bc8abb42c83534438e418b52394a8f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 25 Jul 2022 12:27:10 -0700
	      Finished:     Mon, 25 Jul 2022 12:27:10 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n79c8 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n79c8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220725122408-44543
	  Normal  Pulling    5m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.512002397s
	  Normal  Created    5m11s  kubelet            Created container mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (304.34s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (253.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220725123225-44543 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0725 12:36:38.156418   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.161681   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.173885   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.195979   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.237130   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.319334   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.481558   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:38.802127   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220725123225-44543 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m13.453107241s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-20220725123225-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220725123225-44543 in cluster ingress-addon-legacy-20220725123225-44543
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 12:32:25.447412   48880 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:32:25.447533   48880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:32:25.447538   48880 out.go:309] Setting ErrFile to fd 2...
	I0725 12:32:25.447542   48880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:32:25.447664   48880 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:32:25.448205   48880 out.go:303] Setting JSON to false
	I0725 12:32:25.464169   48880 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":12717,"bootTime":1658764828,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:32:25.464242   48880 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:32:25.485460   48880 out.go:177] * [ingress-addon-legacy-20220725123225-44543] minikube v1.26.0 on Darwin 12.4
	I0725 12:32:25.527702   48880 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 12:32:25.527658   48880 notify.go:193] Checking for updates...
	I0725 12:32:25.549374   48880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:32:25.570392   48880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:32:25.596708   48880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:32:25.618687   48880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 12:32:25.640716   48880 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 12:32:25.710083   48880 docker.go:137] docker version: linux-20.10.17
	I0725 12:32:25.710212   48880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:32:25.841514   48880 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 19:32:25.786971632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:32:25.863661   48880 out.go:177] * Using the docker driver based on user configuration
	I0725 12:32:25.885599   48880 start.go:284] selected driver: docker
	I0725 12:32:25.885659   48880 start.go:808] validating driver "docker" against <nil>
	I0725 12:32:25.885686   48880 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:32:25.888930   48880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:32:26.021677   48880 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 19:32:25.966025656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:32:26.021799   48880 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 12:32:26.021978   48880 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 12:32:26.043723   48880 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 12:32:26.065710   48880 cni.go:95] Creating CNI manager for ""
	I0725 12:32:26.065747   48880 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:32:26.065772   48880 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220725123225-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725123225-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:32:26.087648   48880 out.go:177] * Starting control plane node ingress-addon-legacy-20220725123225-44543 in cluster ingress-addon-legacy-20220725123225-44543
	I0725 12:32:26.131738   48880 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 12:32:26.153664   48880 out.go:177] * Pulling base image ...
	I0725 12:32:26.195722   48880 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 12:32:26.195776   48880 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 12:32:26.259798   48880 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 12:32:26.259821   48880 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 12:32:26.262275   48880 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0725 12:32:26.262286   48880 cache.go:57] Caching tarball of preloaded images
	I0725 12:32:26.262475   48880 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 12:32:26.305954   48880 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0725 12:32:26.328174   48880 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0725 12:32:26.423337   48880 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0725 12:32:31.000962   48880 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0725 12:32:31.001118   48880 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0725 12:32:31.619069   48880 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0725 12:32:31.619290   48880 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/config.json ...
	I0725 12:32:31.619312   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/config.json: {Name:mk1253bdcdf66cf09d15b7dc0a5a2809d200f457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:31.619623   48880 cache.go:208] Successfully downloaded all kic artifacts
	I0725 12:32:31.619650   48880 start.go:370] acquiring machines lock for ingress-addon-legacy-20220725123225-44543: {Name:mke99b535306793ecd3013e42c438573b2e04f28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:32:31.619792   48880 start.go:374] acquired machines lock for "ingress-addon-legacy-20220725123225-44543" in 134.347µs
	I0725 12:32:31.619814   48880 start.go:92] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220725123225-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725123225-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 12:32:31.619856   48880 start.go:132] createHost starting for "" (driver="docker")
	I0725 12:32:31.664683   48880 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0725 12:32:31.665040   48880 start.go:166] libmachine.API.Create for "ingress-addon-legacy-20220725123225-44543" (driver="docker")
	I0725 12:32:31.665081   48880 client.go:168] LocalClient.Create starting
	I0725 12:32:31.665246   48880 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem
	I0725 12:32:31.665310   48880 main.go:134] libmachine: Decoding PEM data...
	I0725 12:32:31.665335   48880 main.go:134] libmachine: Parsing certificate...
	I0725 12:32:31.665413   48880 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem
	I0725 12:32:31.665464   48880 main.go:134] libmachine: Decoding PEM data...
	I0725 12:32:31.665479   48880 main.go:134] libmachine: Parsing certificate...
	I0725 12:32:31.666294   48880 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220725123225-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 12:32:31.747047   48880 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220725123225-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 12:32:31.747177   48880 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220725123225-44543] to gather additional debugging logs...
	I0725 12:32:31.747205   48880 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220725123225-44543
	W0725 12:32:31.809204   48880 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220725123225-44543 returned with exit code 1
	I0725 12:32:31.809233   48880 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220725123225-44543]: docker network inspect ingress-addon-legacy-20220725123225-44543: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220725123225-44543
	I0725 12:32:31.809263   48880 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220725123225-44543]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220725123225-44543
	
	** /stderr **
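The failed inspect is the expected first-run path: the network does not exist yet, so creation proceeds. A hedged Go sketch of probing for a network this way (the helper name is hypothetical, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists shells out to `docker network inspect` and treats the
// well-known "No such network" message as "does not exist yet".
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "No such network") {
		return false, nil // first-time creation path, as in the log above
	}
	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
}

func main() {
	ok, err := networkExists("ingress-addon-legacy-20220725123225-44543")
	fmt.Println(ok, err)
}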
	I0725 12:32:31.809360   48880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 12:32:31.903379   48880 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00065a078] misses:0}
	I0725 12:32:31.903414   48880 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 12:32:31.903432   48880 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220725123225-44543 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 12:32:31.903501   48880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725123225-44543 ingress-addon-legacy-20220725123225-44543
	I0725 12:32:31.997701   48880 network_create.go:99] docker network ingress-addon-legacy-20220725123225-44543 192.168.49.0/24 created
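Subnet selection reserves the first free private /24 (192.168.49.0/24 by default) and derives the gateway from it. A minimal sketch of that derivation with the standard net package (simplified relative to minikube's network package):

package main

import (
	"fmt"
	"net"
)

func main() {
	// 192.168.49.0/24 is the default kic subnet seen in the log.
	ip, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	gw := make(net.IP, len(ip.To4()))
	copy(gw, ip.To4())
	gw[3]++ // the gateway is conventionally the first host address (.1)
	ones, _ := ipnet.Mask.Size()
	fmt.Printf("subnet=%s prefix=%d gateway=%s\n", ipnet, ones, gw)
}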
	I0725 12:32:31.997823   48880 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220725123225-44543" container
	I0725 12:32:31.997930   48880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 12:32:32.060403   48880 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220725123225-44543 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725123225-44543 --label created_by.minikube.sigs.k8s.io=true
	I0725 12:32:32.122939   48880 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220725123225-44543
	I0725 12:32:32.123154   48880 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220725123225-44543-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725123225-44543 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220725123225-44543:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 12:32:32.560654   48880 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220725123225-44543
	I0725 12:32:32.560812   48880 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 12:32:32.560827   48880 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 12:32:32.560923   48880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220725123225-44543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 12:32:36.760729   48880 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220725123225-44543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.199507067s)
	I0725 12:32:36.760753   48880 kic.go:188] duration metric: took 4.199830 seconds to extract preloaded images to volume
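The preload step runs tar inside a throwaway container so the cached images land directly in the named volume, and the elapsed time is reported as a duration metric. A rough sketch of the pattern (the host path, volume name, and image below are stand-ins, not the real kicbase invocation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Illustrative: extract an lz4 preload tarball into a docker volume
	// via a temporary container, mirroring the command in the log above.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded.tar.lz4:/preloaded.tar:ro", // hypothetical host path
		"-v", "myvolume:/extractDir", // hypothetical volume
		"my-base-image", // stand-in; the log uses the kicbase image
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
}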
	I0725 12:32:36.760867   48880 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 12:32:36.893646   48880 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220725123225-44543 --name ingress-addon-legacy-20220725123225-44543 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220725123225-44543 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220725123225-44543 --network ingress-addon-legacy-20220725123225-44543 --ip 192.168.49.2 --volume ingress-addon-legacy-20220725123225-44543:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 12:32:37.241330   48880 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725123225-44543 --format={{.State.Running}}
	I0725 12:32:37.310880   48880 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725123225-44543 --format={{.State.Status}}
	I0725 12:32:37.383231   48880 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220725123225-44543 stat /var/lib/dpkg/alternatives/iptables
	I0725 12:32:37.512994   48880 oci.go:144] the created container "ingress-addon-legacy-20220725123225-44543" has a running status.
	I0725 12:32:37.513030   48880 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa...
	I0725 12:32:37.668772   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0725 12:32:37.668834   48880 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 12:32:37.780004   48880 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725123225-44543 --format={{.State.Status}}
	I0725 12:32:37.846777   48880 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 12:32:37.846797   48880 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220725123225-44543 chown docker:docker /home/docker/.ssh/authorized_keys]
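Key provisioning generates an RSA keypair under .minikube/machines/<name>/ and installs the public half as authorized_keys inside the container. A self-contained sketch of the keypair step (output paths are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key in PEM, like the id_rsa written under .minikube/machines/.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// Public key in authorized_keys format, like the file copied to
	// /home/docker/.ssh/authorized_keys in the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}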
	I0725 12:32:37.965193   48880 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725123225-44543 --format={{.State.Status}}
	I0725 12:32:38.031930   48880 machine.go:88] provisioning docker machine ...
	I0725 12:32:38.032084   48880 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220725123225-44543"
	I0725 12:32:38.032171   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:38.099474   48880 main.go:134] libmachine: Using SSH client type: native
	I0725 12:32:38.099649   48880 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 49222 <nil> <nil>}
	I0725 12:32:38.099663   48880 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220725123225-44543 && echo "ingress-addon-legacy-20220725123225-44543" | sudo tee /etc/hostname
	I0725 12:32:38.231290   48880 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220725123225-44543
	
	I0725 12:32:38.231368   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:38.298716   48880 main.go:134] libmachine: Using SSH client type: native
	I0725 12:32:38.298886   48880 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 49222 <nil> <nil>}
	I0725 12:32:38.298902   48880 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220725123225-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220725123225-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220725123225-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 12:32:38.418975   48880 main.go:134] libmachine: SSH cmd err, output: <nil>: 
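Hostname and /etc/hosts setup run over SSH against the published port (127.0.0.1:49222 here). A minimal sketch of one such remote command with golang.org/x/crypto/ssh; host key checking is skipped only because the target is a local throwaway container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // the key generated above
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container, not a real host
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49222", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// One remote command, like the hostname and /etc/hosts commands above.
	out, err := sess.CombinedOutput(`df --output=fstype / | tail -n 1`)
	fmt.Printf("SSH cmd err, output: %v: %s", err, out)
}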
	I0725 12:32:38.418993   48880 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 12:32:38.419010   48880 ubuntu.go:177] setting up certificates
	I0725 12:32:38.419025   48880 provision.go:83] configureAuth start
	I0725 12:32:38.419087   48880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:38.486597   48880 provision.go:138] copyHostCerts
	I0725 12:32:38.486634   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 12:32:38.486685   48880 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 12:32:38.486694   48880 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 12:32:38.486798   48880 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 12:32:38.486955   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 12:32:38.486984   48880 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 12:32:38.486988   48880 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 12:32:38.487045   48880 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 12:32:38.487175   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 12:32:38.487204   48880 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 12:32:38.487210   48880 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 12:32:38.487264   48880 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 12:32:38.487380   48880 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220725123225-44543 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220725123225-44543]
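The server certificate carries the node IP and service names as SANs, matching the san=[...] list above. A condensed sketch of issuing such a certificate with crypto/x509 (self-signed for brevity, whereas minikube signs with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube"},
	}
	// Self-signed in this sketch; minikube uses its CA cert/key as the signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0644); err != nil {
		panic(err)
	}
}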
	I0725 12:32:38.569165   48880 provision.go:172] copyRemoteCerts
	I0725 12:32:38.569264   48880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 12:32:38.569307   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:38.638044   48880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:32:38.726884   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 12:32:38.726965   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 12:32:38.742931   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 12:32:38.742998   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0725 12:32:38.759632   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 12:32:38.759710   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 12:32:38.776307   48880 provision.go:86] duration metric: configureAuth took 357.262576ms
	I0725 12:32:38.776320   48880 ubuntu.go:193] setting minikube options for container-runtime
	I0725 12:32:38.776446   48880 config.go:178] Loaded profile config "ingress-addon-legacy-20220725123225-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0725 12:32:38.776498   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:38.844550   48880 main.go:134] libmachine: Using SSH client type: native
	I0725 12:32:38.844697   48880 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 49222 <nil> <nil>}
	I0725 12:32:38.844713   48880 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 12:32:38.963573   48880 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 12:32:38.963584   48880 ubuntu.go:71] root file system type: overlay
	I0725 12:32:38.963729   48880 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 12:32:38.963792   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:39.032080   48880 main.go:134] libmachine: Using SSH client type: native
	I0725 12:32:39.032250   48880 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 49222 <nil> <nil>}
	I0725 12:32:39.032310   48880 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 12:32:39.162540   48880 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 12:32:39.162635   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:39.230526   48880 main.go:134] libmachine: Using SSH client type: native
	I0725 12:32:39.230708   48880 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 49222 <nil> <nil>}
	I0725 12:32:39.230722   48880 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 12:32:39.804713   48880 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 19:32:39.167605795 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 12:32:39.804735   48880 machine.go:91] provisioned docker machine in 1.772642339s
	I0725 12:32:39.804744   48880 client.go:171] LocalClient.Create took 8.139470325s
	I0725 12:32:39.804759   48880 start.go:174] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220725123225-44543" took 8.139536186s
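Note the `diff -u ... || { mv ...; systemctl restart docker; }` one-liner earlier in this block: Docker is only restarted when the rendered unit actually differs from the installed one. The same guard expressed as a hedged Go sketch (error handling trimmed):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	installed, _ := os.ReadFile("/lib/systemd/system/docker.service")
	rendered, _ := os.ReadFile("/lib/systemd/system/docker.service.new")
	if bytes.Equal(installed, rendered) {
		fmt.Println("unit unchanged, skipping restart") // the diff exits 0 in this case
		return
	}
	// Unit changed: install it and restart docker, as the shell fallback does.
	for _, args := range [][]string{
		{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v: %s\n", args, err, out)
			return
		}
	}
}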
	I0725 12:32:39.804769   48880 start.go:307] post-start starting for "ingress-addon-legacy-20220725123225-44543" (driver="docker")
	I0725 12:32:39.804773   48880 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 12:32:39.804852   48880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 12:32:39.804899   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:39.872351   48880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:32:39.962535   48880 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 12:32:39.965816   48880 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 12:32:39.965830   48880 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 12:32:39.965836   48880 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 12:32:39.965843   48880 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 12:32:39.965851   48880 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 12:32:39.965947   48880 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 12:32:39.966087   48880 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 12:32:39.966092   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> /etc/ssl/certs/445432.pem
	I0725 12:32:39.966239   48880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 12:32:39.973021   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 12:32:39.990137   48880 start.go:310] post-start completed in 185.356691ms
	I0725 12:32:39.990693   48880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:40.057945   48880 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/config.json ...
	I0725 12:32:40.058349   48880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 12:32:40.058413   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:40.125745   48880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:32:40.209764   48880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 12:32:40.214038   48880 start.go:135] duration metric: createHost completed in 8.593978886s
	I0725 12:32:40.214052   48880 start.go:82] releasing machines lock for "ingress-addon-legacy-20220725123225-44543", held for 8.594056055s
	I0725 12:32:40.214129   48880 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:40.283203   48880 ssh_runner.go:195] Run: systemctl --version
	I0725 12:32:40.283207   48880 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 12:32:40.283292   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:40.283314   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:40.355442   48880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:32:40.356514   48880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:32:40.445964   48880 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 12:32:40.576001   48880 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 12:32:40.576068   48880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 12:32:40.585396   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 12:32:40.597508   48880 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 12:32:40.666112   48880 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 12:32:40.729013   48880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 12:32:40.793682   48880 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 12:32:40.992273   48880 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 12:32:41.027887   48880 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 12:32:41.106018   48880 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0725 12:32:41.106217   48880 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220725123225-44543 dig +short host.docker.internal
	I0725 12:32:41.238425   48880 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 12:32:41.238671   48880 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 12:32:41.243221   48880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
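Both host.minikube.internal here and control-plane.minikube.internal later use the same filter-then-append idiom, so /etc/hosts stays idempotent across restarts. The shell pipeline unpacked as a Go sketch (a hypothetical helper, pointed at a copy of the file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line for the name and appends the
// current mapping, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry for this name, filtered like `grep -v`
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("hosts.copy", "192.168.65.2", "host.minikube.internal"))
}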
	I0725 12:32:41.252592   48880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:32:41.319948   48880 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0725 12:32:41.320017   48880 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 12:32:41.347765   48880 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0725 12:32:41.347781   48880 docker.go:542] Images already preloaded, skipping extraction
	I0725 12:32:41.347856   48880 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 12:32:41.377352   48880 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0725 12:32:41.377372   48880 cache_images.go:84] Images are preloaded, skipping loading
	I0725 12:32:41.377443   48880 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 12:32:41.450760   48880 cni.go:95] Creating CNI manager for ""
	I0725 12:32:41.450772   48880 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:32:41.450783   48880 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 12:32:41.450796   48880 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220725123225-44543 NodeName:ingress-addon-legacy-20220725123225-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 12:32:41.450923   48880 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220725123225-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 12:32:41.451002   48880 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220725123225-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725123225-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
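A quick way to sanity-check a rendered config like the kubeadm YAML above is to unmarshal one document into typed fields. A hedged sketch with gopkg.in/yaml.v3 against the KubeProxyConfiguration document:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeProxyConfig captures a few fields from the rendered document above.
type kubeProxyConfig struct {
	APIVersion         string `yaml:"apiVersion"`
	Kind               string `yaml:"kind"`
	ClusterCIDR        string `yaml:"clusterCIDR"`
	MetricsBindAddress string `yaml:"metricsBindAddress"`
}

func main() {
	doc := `apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
`
	var cfg kubeProxyConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // confirms the fields parsed as expected
}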
	I0725 12:32:41.451067   48880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0725 12:32:41.458383   48880 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 12:32:41.458447   48880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 12:32:41.465196   48880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0725 12:32:41.477758   48880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0725 12:32:41.490097   48880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0725 12:32:41.502132   48880 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0725 12:32:41.505644   48880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 12:32:41.514748   48880 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543 for IP: 192.168.49.2
	I0725 12:32:41.514866   48880 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 12:32:41.514914   48880 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 12:32:41.514961   48880 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/client.key
	I0725 12:32:41.514972   48880 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/client.crt with IP's: []
	I0725 12:32:41.605329   48880 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/client.crt ...
	I0725 12:32:41.605339   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/client.crt: {Name:mk35cf2131af545db12bb6bd626ccf0b4e812382 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:41.605658   48880 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/client.key ...
	I0725 12:32:41.605666   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/client.key: {Name:mk5f5a88b280eadbc89d34ba6c66d0f471f4be8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:41.605860   48880 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key.dd3b5fb2
	I0725 12:32:41.605876   48880 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 12:32:41.674455   48880 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt.dd3b5fb2 ...
	I0725 12:32:41.674464   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt.dd3b5fb2: {Name:mkb11848347f5362f60d98aa319c1e4f72453409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:41.674729   48880 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key.dd3b5fb2 ...
	I0725 12:32:41.674737   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key.dd3b5fb2: {Name:mkd059718027c06e06270b3cecd936a3ea75847e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:41.674922   48880 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt
	I0725 12:32:41.675069   48880 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key
	I0725 12:32:41.675217   48880 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.key
	I0725 12:32:41.675234   48880 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.crt with IP's: []
	I0725 12:32:41.776977   48880 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.crt ...
	I0725 12:32:41.776985   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.crt: {Name:mka527eb6306755adad6631644b93395b546cf04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:41.777173   48880 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.key ...
	I0725 12:32:41.777186   48880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.key: {Name:mk8a2bb280f86f95fa8753e5c6eecb2e545c9dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:32:41.777373   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 12:32:41.777400   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 12:32:41.777417   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 12:32:41.777433   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 12:32:41.777450   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 12:32:41.777468   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 12:32:41.777481   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 12:32:41.777497   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 12:32:41.777602   48880 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 12:32:41.777638   48880 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 12:32:41.777646   48880 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 12:32:41.777673   48880 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 12:32:41.777703   48880 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 12:32:41.777734   48880 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 12:32:41.777804   48880 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 12:32:41.777833   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem -> /usr/share/ca-certificates/44543.pem
	I0725 12:32:41.777851   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> /usr/share/ca-certificates/445432.pem
	I0725 12:32:41.777866   48880 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:32:41.778312   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 12:32:41.796282   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 12:32:41.812631   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 12:32:41.829528   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/ingress-addon-legacy-20220725123225-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 12:32:41.846452   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 12:32:41.862967   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 12:32:41.879616   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 12:32:41.895925   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 12:32:41.912457   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 12:32:41.928547   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 12:32:41.945098   48880 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 12:32:41.961544   48880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 12:32:41.973699   48880 ssh_runner.go:195] Run: openssl version
	I0725 12:32:41.978700   48880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 12:32:41.986485   48880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 12:32:41.990041   48880 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 12:32:41.990104   48880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 12:32:41.995160   48880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 12:32:42.002642   48880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 12:32:42.010097   48880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 12:32:42.013807   48880 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 12:32:42.013858   48880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 12:32:42.018971   48880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 12:32:42.026317   48880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 12:32:42.033585   48880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:32:42.037401   48880 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:32:42.037449   48880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:32:42.042749   48880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
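The run above computes each certificate's OpenSSL subject hash and links the file under /etc/ssl/certs/<hash>.0, which is where OpenSSL looks up trusted CAs during verification. A minimal sketch of that same step, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem:

	# Print the subject hash OpenSSL uses to index trusted CA certificates.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	# Expose the certificate under the hashed name; the trailing .0
	# disambiguates multiple certificates that share the same hash.
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"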
	I0725 12:32:42.050156   48880 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220725123225-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220725123225-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:32:42.050258   48880 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 12:32:42.077466   48880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 12:32:42.084942   48880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 12:32:42.091832   48880 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 12:32:42.091883   48880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 12:32:42.098971   48880 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 12:32:42.098992   48880 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 12:32:42.795779   48880 out.go:204]   - Generating certificates and keys ...
	I0725 12:32:44.771894   48880 out.go:204]   - Booting up control plane ...
	W0725 12:34:39.694393   48880 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220725123225-44543 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220725123225-44543 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 19:32:42.152562     952 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:32:44.765642     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:32:44.766675     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 12:34:39.694430   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 12:34:40.109995   48880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:34:40.119147   48880 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 12:34:40.119198   48880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 12:34:40.126109   48880 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 12:34:40.126130   48880 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 12:34:40.793918   48880 out.go:204]   - Generating certificates and keys ...
	I0725 12:34:41.291104   48880 out.go:204]   - Booting up control plane ...
	I0725 12:36:36.215183   48880 kubeadm.go:397] StartCluster complete in 3m54.159675412s
	I0725 12:36:36.215265   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 12:36:36.244055   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.244067   48880 logs.go:276] No container was found matching "kube-apiserver"
	I0725 12:36:36.244126   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 12:36:36.275076   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.275089   48880 logs.go:276] No container was found matching "etcd"
	I0725 12:36:36.275151   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 12:36:36.303065   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.303077   48880 logs.go:276] No container was found matching "coredns"
	I0725 12:36:36.303132   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 12:36:36.331383   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.331396   48880 logs.go:276] No container was found matching "kube-scheduler"
	I0725 12:36:36.331452   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 12:36:36.359661   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.359674   48880 logs.go:276] No container was found matching "kube-proxy"
	I0725 12:36:36.359735   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 12:36:36.387695   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.387706   48880 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 12:36:36.387772   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 12:36:36.417026   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.417041   48880 logs.go:276] No container was found matching "storage-provisioner"
	I0725 12:36:36.417108   48880 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 12:36:36.446827   48880 logs.go:274] 0 containers: []
	W0725 12:36:36.446843   48880 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 12:36:36.446852   48880 logs.go:123] Gathering logs for kubelet ...
	I0725 12:36:36.446859   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 12:36:36.489755   48880 logs.go:123] Gathering logs for dmesg ...
	I0725 12:36:36.489773   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 12:36:36.503315   48880 logs.go:123] Gathering logs for describe nodes ...
	I0725 12:36:36.503332   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 12:36:36.554753   48880 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 12:36:36.554765   48880 logs.go:123] Gathering logs for Docker ...
	I0725 12:36:36.554774   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 12:36:36.570031   48880 logs.go:123] Gathering logs for container status ...
	I0725 12:36:36.570047   48880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 12:36:38.626881   48880 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056773947s)
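The container-status probe above uses a shell fallback: "which crictl || echo crictl" substitutes the crictl path when the binary is installed, and otherwise leaves a bare crictl that fails, so the trailing "|| sudo docker ps -a" runs instead. The idiom in isolation:

	# Prefer crictl when present; fall back to plain docker otherwise.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a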
	W0725 12:36:38.627021   48880 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 19:34:40.178847    3428 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:34:41.283503    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:34:41.284810    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 12:36:38.627037   48880 out.go:239] * 
	W0725 12:36:38.627158   48880 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 19:34:40.178847    3428 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:34:41.283503    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:34:41.284810    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 12:36:38.627174   48880 out.go:239] * 
	W0725 12:36:38.627736   48880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 12:36:38.691287   48880 out.go:177] 
	W0725 12:36:38.733490   48880 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 19:34:40.178847    3428 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:34:41.283503    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:34:41.284810    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in Docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0725 19:34:40.178847    3428 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:34:41.283503    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:34:41.284810    3428 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 12:36:38.733618   48880 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 12:36:38.733669   48880 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 12:36:38.791616   48880 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220725123225-44543 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (253.48s)
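The kubeadm advice embedded in the failure above reduces to a short manual triage sequence. As a sketch only (the profile name, the start flags, and the cgroup-driver setting are taken from this run's log; these commands are not part of the test suite), it could be run like this:

	# On the host: open a shell inside the minikube node container
	minikube -p ingress-addon-legacy-20220725123225-44543 ssh

	# Inside the node: check the kubelet the same way kubeadm's kubelet-check does
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	curl -sSL http://localhost:10248/healthz

	# Inside the node: look for a crashed control-plane container and read its logs
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID

	# Back on the host: recreate the cluster with the suggested kubelet cgroup driver
	minikube delete -p ingress-addon-legacy-20220725123225-44543
	minikube start -p ingress-addon-legacy-20220725123225-44543 --kubernetes-version=v1.18.20 --memory=4096 --driver=docker --extra-config=kubelet.cgroup-driver=systemd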

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725123225-44543 addons enable ingress --alsologtostderr -v=5
E0725 12:36:39.442474   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:40.724966   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:43.285974   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:44.427999   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:36:48.406278   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:36:58.647669   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:37:19.128475   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:38:00.091678   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725123225-44543 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.075185227s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0725 12:36:38.938265   49210 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:36:38.938432   49210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:36:38.938437   49210 out.go:309] Setting ErrFile to fd 2...
	I0725 12:36:38.938441   49210 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:36:38.938542   49210 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:36:38.939179   49210 config.go:178] Loaded profile config "ingress-addon-legacy-20220725123225-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0725 12:36:38.939192   49210 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220725123225-44543"
	I0725 12:36:38.939199   49210 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220725123225-44543"
	I0725 12:36:38.939444   49210 host.go:66] Checking if "ingress-addon-legacy-20220725123225-44543" exists ...
	I0725 12:36:38.939941   49210 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725123225-44543 --format={{.State.Status}}
	I0725 12:36:39.028567   49210 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0725 12:36:39.050189   49210 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0725 12:36:39.071357   49210 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0725 12:36:39.093215   49210 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0725 12:36:39.115431   49210 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 12:36:39.115475   49210 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0725 12:36:39.115608   49210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:36:39.184302   49210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:36:39.278375   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:39.327067   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:39.327093   49210 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:39.603494   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:39.656702   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:39.656722   49210 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:40.197112   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:40.248578   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:40.248593   49210 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:40.904059   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:40.956803   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:40.956818   49210 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:41.748472   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:41.799597   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:41.799623   49210 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:42.970659   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:43.024738   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:43.024751   49210 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:45.279365   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:45.331758   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:45.331772   49210 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:46.943932   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:46.995167   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:46.995180   49210 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:49.801850   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:49.854000   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:49.854015   49210 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:53.679365   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:36:53.729470   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:36:53.729489   49210 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:37:01.427562   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:37:01.477801   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:37:01.477816   49210 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:37:16.115671   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:37:16.169299   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:37:16.169321   49210 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:37:44.577732   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:37:44.630153   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:37:44.630168   49210 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:07.801323   49210 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0725 12:38:07.852292   49210 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:07.852319   49210 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220725123225-44543"
	I0725 12:38:07.873782   49210 out.go:177] * Verifying ingress addon...
	I0725 12:38:07.895538   49210 out.go:177] 
	W0725 12:38:07.916860   49210 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220725123225-44543" does not exist: client config: context "ingress-addon-legacy-20220725123225-44543" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220725123225-44543" does not exist: client config: context "ingress-addon-legacy-20220725123225-44543" does not exist]
	W0725 12:38:07.916877   49210 out.go:239] * 
	* 
	W0725 12:38:07.920316   49210 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 12:38:07.941574   49210 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
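Every apply attempt above fails identically while the apiserver is unreachable, and minikube's retry helper (retry.go:31) reissues it with growing, slightly jittered delays (276ms, 540ms, 655ms, ... up to roughly 28s). A rough shell re-creation of that loop, useful when debugging from a node shell, is sketched below; the attempt cap and the plain doubling are assumptions, since the real backoff is implemented in minikube's Go retry package:

	# Hypothetical apply-with-backoff loop mirroring the retries logged above
	delay=0.3
	for attempt in $(seq 1 12); do
	  if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.18.20/kubectl apply \
	      -f /etc/kubernetes/addons/ingress-deploy.yaml; then
	    break
	  fi
	  echo "apply failed, retrying in ${delay}s (attempt ${attempt})"
	  sleep "${delay}"
	  delay=$(awk -v d="${delay}" 'BEGIN { print d * 2 }')
	done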
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220725123225-44543
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220725123225-44543:

-- stdout --
	[
	    {
	        "Id": "e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942",
	        "Created": "2022-07-25T19:32:36.963637983Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T19:32:37.237228576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/hostname",
	        "HostsPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/hosts",
	        "LogPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942-json.log",
	        "Name": "/ingress-addon-legacy-20220725123225-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220725123225-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220725123225-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220725123225-44543",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220725123225-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220725123225-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725123225-44543",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725123225-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7de496e1538968bd459a3ab4b9916934b13633f3ff2cfa28f7cc07968828944a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49222"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49223"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49224"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49226"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7de496e15389",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220725123225-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e31d988cf375",
	                        "ingress-addon-legacy-20220725123225-44543"
	                    ],
	                    "NetworkID": "5f29106d3965bba8a654748ac2db40b45790fd50973cedfb3f3c7b02eab3b310",
	                    "EndpointID": "1d0dc1368695069e9e43772f4ddecfa65804d990e610ba34d3a98e53199b612c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
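Two values in the inspect dump are worth decoding. "Memory": 4294967296 is simply the --memory=4096 start flag expressed in bytes (4096 MiB × 1048576), and "NanoCpus": 2000000000 is 2 CPUs. Every exposed port is bound to a dynamically assigned host port (49222-49226 here), and the same Go-template filter the harness uses elsewhere in this log can pull a single mapping back out, e.g. the apiserver port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-20220725123225-44543
	# prints 49226 for the container captured above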
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725123225-44543 -n ingress-addon-legacy-20220725123225-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725123225-44543 -n ingress-addon-legacy-20220725123225-44543: exit status 6 (430.593253ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 12:38:08.456900   49314 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220725123225-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220725123225-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.58s)
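The status probe reports the host itself as Running; exit status 6 reflects only the stale kubeconfig called out in the WARNING. The fix minikube suggests is a single command; the kubectl check afterwards is an illustrative addition, not part of minikube's hint:

	minikube update-context -p ingress-addon-legacy-20220725123225-44543
	kubectl config current-context
	# should now print: ingress-addon-legacy-20220725123225-44543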

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725123225-44543 addons enable ingress-dns --alsologtostderr -v=5
E0725 12:39:22.015944   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220725123225-44543 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.007715605s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0725 12:38:08.515518   49324 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:38:08.515696   49324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:38:08.515702   49324 out.go:309] Setting ErrFile to fd 2...
	I0725 12:38:08.515705   49324 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:38:08.515828   49324 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:38:08.516408   49324 config.go:178] Loaded profile config "ingress-addon-legacy-20220725123225-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0725 12:38:08.516421   49324 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220725123225-44543"
	I0725 12:38:08.516432   49324 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220725123225-44543"
	I0725 12:38:08.516815   49324 host.go:66] Checking if "ingress-addon-legacy-20220725123225-44543" exists ...
	I0725 12:38:08.517283   49324 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220725123225-44543 --format={{.State.Status}}
	I0725 12:38:08.605826   49324 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0725 12:38:08.627402   49324 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0725 12:38:08.648685   49324 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 12:38:08.648711   49324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0725 12:38:08.648794   49324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220725123225-44543
	I0725 12:38:08.716110   49324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/ingress-addon-legacy-20220725123225-44543/id_rsa Username:docker}
	I0725 12:38:08.809257   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:08.858570   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:08.858592   49324 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:09.135815   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:09.187576   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:09.187594   49324 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:09.730099   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:09.781468   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:09.781483   49324 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:10.438861   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:10.489710   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:10.489725   49324 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:11.281731   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:11.333516   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:11.333529   49324 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:12.503941   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:12.552844   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:12.552860   49324 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:14.808295   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:14.859359   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:14.859375   49324 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:16.472492   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:16.526602   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:16.526618   49324 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:19.332999   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:19.383178   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:19.383195   49324 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:23.208409   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:23.258650   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:23.258665   49324 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:30.956856   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:31.009371   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:31.009384   49324 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:45.647622   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:38:45.701804   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:38:45.701824   49324 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:39:14.109655   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:39:14.161944   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:39:14.161961   49324 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:39:37.333124   49324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0725 12:39:37.383588   49324 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 12:39:37.405713   49324 out.go:177] 
	W0725 12:39:37.427306   49324 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0725 12:39:37.427321   49324 out.go:239] * 
	W0725 12:39:37.430630   49324 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 12:39:37.452395   49324 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220725123225-44543
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220725123225-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942",
	        "Created": "2022-07-25T19:32:36.963637983Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T19:32:37.237228576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/hostname",
	        "HostsPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/hosts",
	        "LogPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942-json.log",
	        "Name": "/ingress-addon-legacy-20220725123225-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220725123225-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220725123225-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220725123225-44543",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220725123225-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220725123225-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725123225-44543",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725123225-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7de496e1538968bd459a3ab4b9916934b13633f3ff2cfa28f7cc07968828944a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49222"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49223"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49224"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49226"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7de496e15389",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220725123225-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e31d988cf375",
	                        "ingress-addon-legacy-20220725123225-44543"
	                    ],
	                    "NetworkID": "5f29106d3965bba8a654748ac2db40b45790fd50973cedfb3f3c7b02eab3b310",
	                    "EndpointID": "1d0dc1368695069e9e43772f4ddecfa65804d990e610ba34d3a98e53199b612c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725123225-44543 -n ingress-addon-legacy-20220725123225-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725123225-44543 -n ingress-addon-legacy-20220725123225-44543: exit status 6 (424.784436ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:39:37.961824   49417 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220725123225-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220725123225-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.50s)
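
Editor's note: every retry in the log above fails identically — kubectl on the node cannot reach the apiserver at localhost:8443, so enabling the ingress-dns addon finally aborts with MK_ADDON_ENABLE (exit status 10). The manifest never gets a chance to apply; the apiserver itself has stopped answering. A minimal Go sketch for probing the apiserver from the host follows, assuming the host-mapped port 49226 shown in the docker inspect output above and an illustrative five-attempt, two-second cadence (not the test's actual retry logic):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification: minikube's apiserver serves a self-signed cert.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// 8443/tcp in the kic container is published on host port 49226 (see inspect above).
		const url = "https://127.0.0.1:49226/healthz"
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				// A dead apiserver yields the same "connection refused" seen in the log.
				fmt.Printf("attempt %d: %v\n", attempt, err)
			} else {
				fmt.Printf("attempt %d: %s\n", attempt, resp.Status)
				resp.Body.Close()
			}
			time.Sleep(2 * time.Second)
		}
	}

If every attempt reports "connection refused", the apiserver in the container is down and the addon failure is only a symptom; collecting `minikube logs` for the profile, as the boxed advice above suggests, is the next step.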

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220725123225-44543
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220725123225-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942",
	        "Created": "2022-07-25T19:32:36.963637983Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T19:32:37.237228576Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/hostname",
	        "HostsPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/hosts",
	        "LogPath": "/var/lib/docker/containers/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942/e31d988cf37501e00a0fe669941dd6c6e0aa90b9125c05d4a99ba61c2751f942-json.log",
	        "Name": "/ingress-addon-legacy-20220725123225-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220725123225-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220725123225-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54abd23bae110c24f8c0c1064cacc0151a8a996347cb4f1c0df870dd6a302ca2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220725123225-44543",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220725123225-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220725123225-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725123225-44543",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220725123225-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7de496e1538968bd459a3ab4b9916934b13633f3ff2cfa28f7cc07968828944a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49222"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49223"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49224"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "49226"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7de496e15389",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220725123225-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e31d988cf375",
	                        "ingress-addon-legacy-20220725123225-44543"
	                    ],
	                    "NetworkID": "5f29106d3965bba8a654748ac2db40b45790fd50973cedfb3f3c7b02eab3b310",
	                    "EndpointID": "1d0dc1368695069e9e43772f4ddecfa65804d990e610ba34d3a98e53199b612c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725123225-44543 -n ingress-addon-legacy-20220725123225-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220725123225-44543 -n ingress-addon-legacy-20220725123225-44543: exit status 6 (424.866086ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:39:38.458777   49429 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220725123225-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220725123225-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)
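
Editor's note: both failures in this subtest trace back to the kubeconfig rather than the cluster — the profile entry is missing from the Jenkins kubeconfig, so status cannot extract an endpoint IP and the test cannot construct a Kubernetes client (hence the <nil>). The sketch below shows the equivalent client-go construction and where it breaks; the kubeconfig path is a placeholder, and this illustrates the failure mode rather than reproducing the test's own code:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; the CI run uses the minikube-integration kubeconfig above.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			// With the cluster's entry gone from the kubeconfig, this is where
			// client construction stops -- the test then reports a nil client.
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			fmt.Println("cannot build clientset:", err)
			return
		}
		fmt.Printf("clientset ready: %T\n", clientset)
	}

The status warning above points at the same fix: `minikube update-context` rewrites the missing entry for the profile.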

                                                
                                    
x
+
TestPreload (266.31s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220725125155-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0725 12:53:01.212426   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220725125155-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m23.240115443s)

                                                
                                                
-- stdout --
	* [test-preload-20220725125155-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220725125155-44543 in cluster test-preload-20220725125155-44543
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 12:51:55.248708   53119 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:51:55.249087   53119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:51:55.249098   53119 out.go:309] Setting ErrFile to fd 2...
	I0725 12:51:55.249108   53119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:51:55.249314   53119 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:51:55.270936   53119 out.go:303] Setting JSON to false
	I0725 12:51:55.287620   53119 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":13887,"bootTime":1658764828,"procs":365,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:51:55.287722   53119 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:51:55.309302   53119 out.go:177] * [test-preload-20220725125155-44543] minikube v1.26.0 on Darwin 12.4
	I0725 12:51:55.330547   53119 notify.go:193] Checking for updates...
	I0725 12:51:55.352379   53119 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 12:51:55.374064   53119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:51:55.395507   53119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:51:55.417406   53119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:51:55.439319   53119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 12:51:55.461632   53119 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 12:51:55.530594   53119 docker.go:137] docker version: linux-20.10.17
	I0725 12:51:55.530710   53119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:51:55.664353   53119 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 19:51:55.607642405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:51:55.708054   53119 out.go:177] * Using the docker driver based on user configuration
	I0725 12:51:55.729204   53119 start.go:284] selected driver: docker
	I0725 12:51:55.729231   53119 start.go:808] validating driver "docker" against <nil>
	I0725 12:51:55.729253   53119 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:51:55.732614   53119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:51:55.866113   53119 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-25 19:51:55.811108277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:51:55.866225   53119 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 12:51:55.866368   53119 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 12:51:55.888363   53119 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 12:51:55.909686   53119 cni.go:95] Creating CNI manager for ""
	I0725 12:51:55.909716   53119 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:51:55.909751   53119 start_flags.go:310] config:
	{Name:test-preload-20220725125155-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725125155-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:51:55.933063   53119 out.go:177] * Starting control plane node test-preload-20220725125155-44543 in cluster test-preload-20220725125155-44543
	I0725 12:51:55.978811   53119 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 12:51:55.998933   53119 out.go:177] * Pulling base image ...
	I0725 12:51:56.042102   53119 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0725 12:51:56.042153   53119 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 12:51:56.042469   53119 cache.go:107] acquiring lock: {Name:mk3cfff2a3ebc7f66abd20c921baf88f4c733b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.043369   53119 cache.go:107] acquiring lock: {Name:mkc34aa16978e2733dda0e95e647824b91997c76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.044337   53119 cache.go:107] acquiring lock: {Name:mkad5d992d7d0ddfedc128f1b3c6827491c37bd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.044444   53119 cache.go:107] acquiring lock: {Name:mkf20af9091f934695cd30560540215778489604 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.044352   53119 cache.go:107] acquiring lock: {Name:mk0da0330ce13f36aad54642fb3b8194b36b2f07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.044453   53119 cache.go:107] acquiring lock: {Name:mke04307c82246ffbba142de4d50771cc786f9f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.044658   53119 cache.go:107] acquiring lock: {Name:mk3c1c4f4e4b5053bcfe099795f4d4ad475346a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.045437   53119 cache.go:107] acquiring lock: {Name:mk17f87b6be4b0c38ad7104e6b038f70f8b4be35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.045596   53119 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 12:51:56.045640   53119 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0725 12:51:56.045654   53119 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.169753ms
	I0725 12:51:56.045683   53119 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 12:51:56.045722   53119 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/config.json ...
	I0725 12:51:56.045740   53119 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:51:56.045766   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/config.json: {Name:mka3f9aed5b38bdb48239c5803e6f85990326944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:51:56.045835   53119 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:51:56.045941   53119 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:51:56.046006   53119 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0725 12:51:56.046053   53119 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:51:56.046107   53119 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0725 12:51:56.053075   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:51:56.053100   53119 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0725 12:51:56.053487   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:51:56.054364   53119 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0725 12:51:56.054540   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:51:56.054678   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:51:56.055675   53119 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
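Each "daemon lookup ... Error: No such image" line above is an expected miss: minikube first probes the local docker daemon for an image and falls back to the registry only when the probe fails. A hedged sketch of that probe via the docker CLI (image refs copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has the
// image; a failed inspect just means the image must come from the registry.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	for _, ref := range []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/etcd:3.4.3-0"} {
		if imageInDaemon(ref) {
			fmt.Println(ref, "found in local daemon, skipping pull")
		} else {
			fmt.Println(ref, "not in daemon; would pull from registry")
		}
	}
}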
	I0725 12:51:56.111234   53119 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 12:51:56.111255   53119 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 12:51:56.111269   53119 cache.go:208] Successfully downloaded all kic artifacts
	I0725 12:51:56.111319   53119 start.go:370] acquiring machines lock for test-preload-20220725125155-44543: {Name:mkd1cfd2d45dd671be72cd0221fb4197a233f903 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:51:56.111459   53119 start.go:374] acquired machines lock for "test-preload-20220725125155-44543" in 128.084µs
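The machines lock above carries Delay:500ms and Timeout:10m0s, i.e. poll with a fixed delay until a deadline. A rough stand-in using an exclusive lock file; the path and the release-by-delete convention are illustrative, not minikube's actual lock implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, sleeping delay between
// attempts, and gives up after timeout, mirroring the Delay/Timeout fields
// printed in the lock lines above.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close()
			return nil // caller removes path to release
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("acquired machines lock")
	os.Remove("/tmp/minikube-machines.lock")
}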
	I0725 12:51:56.111484   53119 start.go:92] Provisioning new machine with config: &{Name:test-preload-20220725125155-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725125155-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 12:51:56.111587   53119 start.go:132] createHost starting for "" (driver="docker")
	I0725 12:51:56.135897   53119 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 12:51:56.136193   53119 start.go:166] libmachine.API.Create for "test-preload-20220725125155-44543" (driver="docker")
	I0725 12:51:56.136233   53119 client.go:168] LocalClient.Create starting
	I0725 12:51:56.136326   53119 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem
	I0725 12:51:56.136369   53119 main.go:134] libmachine: Decoding PEM data...
	I0725 12:51:56.136382   53119 main.go:134] libmachine: Parsing certificate...
	I0725 12:51:56.136441   53119 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem
	I0725 12:51:56.136464   53119 main.go:134] libmachine: Decoding PEM data...
	I0725 12:51:56.136476   53119 main.go:134] libmachine: Parsing certificate...
	I0725 12:51:56.136928   53119 cli_runner.go:164] Run: docker network inspect test-preload-20220725125155-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 12:51:56.200699   53119 cli_runner.go:211] docker network inspect test-preload-20220725125155-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 12:51:56.200801   53119 network_create.go:272] running [docker network inspect test-preload-20220725125155-44543] to gather additional debugging logs...
	I0725 12:51:56.200821   53119 cli_runner.go:164] Run: docker network inspect test-preload-20220725125155-44543
	W0725 12:51:56.264787   53119 cli_runner.go:211] docker network inspect test-preload-20220725125155-44543 returned with exit code 1
	I0725 12:51:56.264815   53119 network_create.go:275] error running [docker network inspect test-preload-20220725125155-44543]: docker network inspect test-preload-20220725125155-44543: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220725125155-44543
	I0725 12:51:56.264824   53119 network_create.go:277] output of [docker network inspect test-preload-20220725125155-44543]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220725125155-44543
	
	** /stderr **
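The non-zero exit with "No such network" on stderr is exactly the signal minikube keys off: a failed inspect means the network does not exist yet and must be created. A small sketch of that existence probe (network name illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// networkExists mirrors the probe above: "docker network inspect" exits
// non-zero when the named network is absent.
func networkExists(name string) bool {
	return exec.Command("docker", "network", "inspect", name).Run() == nil
}

func main() {
	name := "test-preload-example" // illustrative name
	if !networkExists(name) {
		fmt.Println("network", name, "missing; would create it")
	}
}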
	I0725 12:51:56.264873   53119 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 12:51:56.330393   53119 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000bde8d0] misses:0}
	I0725 12:51:56.330430   53119 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 12:51:56.330448   53119 network_create.go:115] attempt to create docker network test-preload-20220725125155-44543 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 12:51:56.330520   53119 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 test-preload-20220725125155-44543
	W0725 12:51:56.393525   53119 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 test-preload-20220725125155-44543 returned with exit code 1
	W0725 12:51:56.393556   53119 network_create.go:107] failed to create docker network test-preload-20220725125155-44543 192.168.49.0/24, will retry: subnet is taken
	I0725 12:51:56.393786   53119 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bde8d0] amended:false}} dirty:map[] misses:0}
	I0725 12:51:56.393800   53119 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 12:51:56.394011   53119 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bde8d0] amended:true}} dirty:map[192.168.49.0:0xc000bde8d0 192.168.58.0:0xc000cc4420] misses:0}
	I0725 12:51:56.394025   53119 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 12:51:56.394033   53119 network_create.go:115] attempt to create docker network test-preload-20220725125155-44543 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 12:51:56.394084   53119 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 test-preload-20220725125155-44543
	W0725 12:51:56.457389   53119 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 test-preload-20220725125155-44543 returned with exit code 1
	W0725 12:51:56.457413   53119 network_create.go:107] failed to create docker network test-preload-20220725125155-44543 192.168.58.0/24, will retry: subnet is taken
	I0725 12:51:56.457679   53119 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bde8d0] amended:true}} dirty:map[192.168.49.0:0xc000bde8d0 192.168.58.0:0xc000cc4420] misses:1}
	I0725 12:51:56.457697   53119 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 12:51:56.457889   53119 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000bde8d0] amended:true}} dirty:map[192.168.49.0:0xc000bde8d0 192.168.58.0:0xc000cc4420 192.168.67.0:0xc000cc4478] misses:1}
	I0725 12:51:56.457907   53119 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 12:51:56.457915   53119 network_create.go:115] attempt to create docker network test-preload-20220725125155-44543 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 12:51:56.457971   53119 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 test-preload-20220725125155-44543
	I0725 12:51:56.550694   53119 network_create.go:99] docker network test-preload-20220725125155-44543 192.168.67.0/24 created
	I0725 12:51:56.550722   53119 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220725125155-44543" container
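The three attempts above show the subnet-retry loop: reserve a candidate /24, try "docker network create", and move to the next candidate when the daemon reports the subnet is taken (49.0 and 58.0 were held by other networks; 67.0 was free). A simplified sketch of the loop; the step of 9 matches the progression in this log, though the real allocator is more general:

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork steps through candidate /24 subnets until
// "docker network create" stops failing because the subnet is taken.
func createNetwork(name string) (string, error) {
	for third := 49; third <= 103; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
		if err == nil {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free subnet for %s", name)
}

func main() {
	subnet, err := createNetwork("test-preload-example")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created on", subnet) // node then gets gateway+1, e.g. .2
}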
	I0725 12:51:56.550797   53119 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 12:51:56.591032   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0725 12:51:56.592101   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0725 12:51:56.592490   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0725 12:51:56.593145   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0725 12:51:56.614278   53119 cli_runner.go:164] Run: docker volume create test-preload-20220725125155-44543 --label name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 --label created_by.minikube.sigs.k8s.io=true
	I0725 12:51:56.677289   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0725 12:51:56.677379   53119 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 633.600441ms
	I0725 12:51:56.677402   53119 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0725 12:51:56.678307   53119 oci.go:103] Successfully created a docker volume test-preload-20220725125155-44543
	I0725 12:51:56.678379   53119 cli_runner.go:164] Run: docker run --rm --name test-preload-20220725125155-44543-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 --entrypoint /usr/bin/test -v test-preload-20220725125155-44543:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 12:51:56.686719   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0725 12:51:56.687222   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0725 12:51:56.809223   53119 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0725 12:51:57.123017   53119 oci.go:107] Successfully prepared a docker volume test-preload-20220725125155-44543
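The "-preload-sidecar" run above is a throwaway-container trick: with the entrypoint overridden to /usr/bin/test, the container's only job is `test -d /var/lib`, which confirms the named volume mounts correctly while also forcing the base image to be present locally. A sketch with an illustrative volume name and a stock image assumed to ship /usr/bin/test:

package main

import (
	"fmt"
	"os/exec"
)

// prepareVolume runs a one-shot container whose entrypoint is /usr/bin/test,
// so its exit code answers "does /var/lib exist on the mounted volume?".
// ubuntu:22.04 stands in for the kicbase image used in the log.
func prepareVolume(vol, image string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", vol+":/var", image, "-d", "/var/lib").Run()
}

func main() {
	if err := prepareVolume("example-vol", "ubuntu:22.04"); err != nil {
		fmt.Println("volume not usable:", err)
		return
	}
	fmt.Println("volume prepared")
}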
	I0725 12:51:57.123044   53119 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0725 12:51:57.123117   53119 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 12:51:57.269373   53119 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220725125155-44543 --name test-preload-20220725125155-44543 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220725125155-44543 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220725125155-44543 --network test-preload-20220725125155-44543 --ip 192.168.67.2 --volume test-preload-20220725125155-44543:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 12:51:57.485891   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0725 12:51:57.485916   53119 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 1.441438672s
	I0725 12:51:57.485928   53119 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0725 12:51:57.691943   53119 cli_runner.go:164] Run: docker container inspect test-preload-20220725125155-44543 --format={{.State.Running}}
	I0725 12:51:57.768912   53119 cli_runner.go:164] Run: docker container inspect test-preload-20220725125155-44543 --format={{.State.Status}}
	I0725 12:51:57.852111   53119 cli_runner.go:164] Run: docker exec test-preload-20220725125155-44543 stat /var/lib/dpkg/alternatives/iptables
	I0725 12:51:57.987054   53119 oci.go:144] the created container "test-preload-20220725125155-44543" has a running status.
	I0725 12:51:57.987126   53119 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa...
	I0725 12:51:58.291833   53119 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
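The id_rsa / authorized_keys step generates a host-side RSA keypair and installs the public half inside the container for the docker user. A sketch of the key-material handling, assuming the golang.org/x/crypto/ssh package is available for the authorized_keys encoding:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the private key kept on the host (id_rsa).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Encode the public half as an authorized_keys line for the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa and id_rsa.pub")
}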
	I0725 12:51:58.407239   53119 cli_runner.go:164] Run: docker container inspect test-preload-20220725125155-44543 --format={{.State.Status}}
	I0725 12:51:58.420725   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0725 12:51:58.420754   53119 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 2.376471391s
	I0725 12:51:58.420771   53119 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0725 12:51:58.479411   53119 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 12:51:58.479426   53119 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220725125155-44543 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 12:51:58.600074   53119 cli_runner.go:164] Run: docker container inspect test-preload-20220725125155-44543 --format={{.State.Status}}
	I0725 12:51:58.671638   53119 machine.go:88] provisioning docker machine ...
	I0725 12:51:58.671667   53119 ubuntu.go:169] provisioning hostname "test-preload-20220725125155-44543"
	I0725 12:51:58.671728   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:51:58.744740   53119 main.go:134] libmachine: Using SSH client type: native
	I0725 12:51:58.744927   53119 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52746 <nil> <nil>}
	I0725 12:51:58.744940   53119 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220725125155-44543 && echo "test-preload-20220725125155-44543" | sudo tee /etc/hostname
	I0725 12:51:58.783984   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0725 12:51:58.784001   53119 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 2.741491849s
	I0725 12:51:58.784017   53119 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0725 12:51:58.873120   53119 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220725125155-44543
	
	I0725 12:51:58.873195   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:51:58.887350   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0725 12:51:58.887383   53119 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 2.843329081s
	I0725 12:51:58.887397   53119 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0725 12:51:58.944987   53119 main.go:134] libmachine: Using SSH client type: native
	I0725 12:51:58.945127   53119 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52746 <nil> <nil>}
	I0725 12:51:58.945142   53119 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220725125155-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220725125155-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220725125155-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 12:51:59.062749   53119 main.go:134] libmachine: SSH cmd err, output: <nil>: 
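The shell above is an idempotent /etc/hosts edit: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append. The same logic in plain Go over the file contents (simplified relative to the grep/sed in the log):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns hosts with the name mapped to 127.0.1.1, changing
// nothing when an entry already exists; the caller writes the result back.
func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "test-preload-example"))
}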
	I0725 12:51:59.062772   53119 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 12:51:59.062803   53119 ubuntu.go:177] setting up certificates
	I0725 12:51:59.062809   53119 provision.go:83] configureAuth start
	I0725 12:51:59.062875   53119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220725125155-44543
	I0725 12:51:59.133050   53119 provision.go:138] copyHostCerts
	I0725 12:51:59.133151   53119 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 12:51:59.133164   53119 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 12:51:59.133265   53119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 12:51:59.133490   53119 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 12:51:59.133501   53119 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 12:51:59.133562   53119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 12:51:59.133696   53119 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 12:51:59.133703   53119 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 12:51:59.133758   53119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 12:51:59.133866   53119 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220725125155-44543 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220725125155-44543]
	I0725 12:51:59.221107   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0725 12:51:59.221149   53119 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 3.176826762s
	I0725 12:51:59.221167   53119 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0725 12:51:59.297568   53119 provision.go:172] copyRemoteCerts
	I0725 12:51:59.297628   53119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 12:51:59.297670   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:51:59.388735   53119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52746 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa Username:docker}
	I0725 12:51:59.473984   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 12:51:59.490808   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 12:51:59.508571   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 12:51:59.525740   53119 provision.go:86] duration metric: configureAuth took 462.908102ms
	I0725 12:51:59.525753   53119 ubuntu.go:193] setting minikube options for container-runtime
	I0725 12:51:59.525900   53119 config.go:178] Loaded profile config "test-preload-20220725125155-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0725 12:51:59.525966   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:51:59.549691   53119 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0725 12:51:59.549715   53119 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 3.505266931s
	I0725 12:51:59.549728   53119 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0725 12:51:59.549746   53119 cache.go:87] Successfully saved all images to host disk.
	I0725 12:51:59.594551   53119 main.go:134] libmachine: Using SSH client type: native
	I0725 12:51:59.594696   53119 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52746 <nil> <nil>}
	I0725 12:51:59.594708   53119 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 12:51:59.713704   53119 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 12:51:59.713715   53119 ubuntu.go:71] root file system type: overlay
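Detecting the root filesystem type, as above, tells the provisioner which storage-driver assumptions hold (overlay here). An equivalent probe, assuming GNU df for the --output flag:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFsType runs the same probe as the SSH command above: the filesystem
// type of / (the last whitespace-separated token of df's output).
func rootFsType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		return "", fmt.Errorf("unexpected df output")
	}
	return fields[len(fields)-1], nil
}

func main() {
	t, err := rootFsType()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("root filesystem:", t)
}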
	I0725 12:51:59.713869   53119 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 12:51:59.713942   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:51:59.784007   53119 main.go:134] libmachine: Using SSH client type: native
	I0725 12:51:59.784175   53119 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52746 <nil> <nil>}
	I0725 12:51:59.784229   53119 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 12:51:59.912618   53119 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 12:51:59.912727   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:51:59.980049   53119 main.go:134] libmachine: Using SSH client type: native
	I0725 12:51:59.980278   53119 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 52746 <nil> <nil>}
	I0725 12:51:59.980294   53119 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 12:52:00.562392   53119 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 19:51:59.912061194 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 12:52:00.562412   53119 machine.go:91] provisioned docker machine in 1.890715665s
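The diff output above comes from the update idiom in the preceding SSH command: write the desired unit to docker.service.new, and only when it differs from the live file swap it in, daemon-reload, enable, and restart. A local sketch of that idiom (the real commands run over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps in a new unit file only when it differs from the live
// one, then reloads and restarts the service; paths are illustrative.
func updateUnit(current, next string) error {
	if exec.Command("diff", "-u", current, next).Run() == nil {
		return os.Remove(next) // identical: nothing to do
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return fmt.Errorf("%v: %w", args, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}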
	I0725 12:52:00.562417   53119 client.go:171] LocalClient.Create took 4.426079131s
	I0725 12:52:00.562452   53119 start.go:174] duration metric: libmachine.API.Create for "test-preload-20220725125155-44543" took 4.426155175s
	I0725 12:52:00.562468   53119 start.go:307] post-start starting for "test-preload-20220725125155-44543" (driver="docker")
	I0725 12:52:00.562474   53119 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 12:52:00.562547   53119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 12:52:00.562593   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:52:00.630922   53119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52746 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa Username:docker}
	I0725 12:52:00.719025   53119 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 12:52:00.722380   53119 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 12:52:00.722397   53119 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 12:52:00.722404   53119 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 12:52:00.722411   53119 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 12:52:00.722419   53119 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 12:52:00.722538   53119 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 12:52:00.722688   53119 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 12:52:00.722855   53119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 12:52:00.729621   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 12:52:00.746399   53119 start.go:310] post-start completed in 183.917678ms
	I0725 12:52:00.746898   53119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220725125155-44543
	I0725 12:52:00.814436   53119 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/config.json ...
	I0725 12:52:00.814818   53119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 12:52:00.814887   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:52:00.881354   53119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52746 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa Username:docker}
	I0725 12:52:00.966212   53119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 12:52:00.970305   53119 start.go:135] duration metric: createHost completed in 4.858600043s
	I0725 12:52:00.970321   53119 start.go:82] releasing machines lock for "test-preload-20220725125155-44543", held for 4.858742924s
	I0725 12:52:00.970386   53119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220725125155-44543
	I0725 12:52:01.037463   53119 ssh_runner.go:195] Run: systemctl --version
	I0725 12:52:01.037496   53119 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 12:52:01.037541   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:52:01.037560   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:52:01.109515   53119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52746 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa Username:docker}
	I0725 12:52:01.110332   53119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52746 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/test-preload-20220725125155-44543/id_rsa Username:docker}
	I0725 12:52:01.325692   53119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 12:52:01.335494   53119 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 12:52:01.335545   53119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 12:52:01.344493   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 12:52:01.356840   53119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 12:52:01.425181   53119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 12:52:01.487102   53119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 12:52:01.554246   53119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 12:52:01.752117   53119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 12:52:01.787993   53119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
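The engine version is queried twice around the restart to confirm the daemon came back healthy with the expected version. The probe is just a formatted "docker version" call:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// serverVersion runs the same probe as the two lines above.
func serverVersion() (string, error) {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	v, err := serverVersion()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("docker server:", v)
}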
	I0725 12:52:01.847848   53119 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0725 12:52:01.848013   53119 cli_runner.go:164] Run: docker exec -t test-preload-20220725125155-44543 dig +short host.docker.internal
	I0725 12:52:01.978536   53119 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 12:52:01.978637   53119 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 12:52:01.982696   53119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
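Digging host.docker.internal from inside the node container, as above, discovers the IP on which containers reach the host; that address is then pinned as host.minikube.internal in the container's /etc/hosts. A sketch of the discovery half (container name illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostGatewayIP asks the container's DNS for host.docker.internal, the
// same dig the log shows.
func hostGatewayIP(container string) (string, error) {
	out, err := exec.Command("docker", "exec", "-t", container,
		"dig", "+short", "host.docker.internal").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip, err := hostGatewayIP("example-node")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host.minikube.internal ->", ip)
}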
	I0725 12:52:01.992671   53119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220725125155-44543
	I0725 12:52:02.059881   53119 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0725 12:52:02.059942   53119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 12:52:02.088991   53119 docker.go:611] Got preloaded images: 
	I0725 12:52:02.089004   53119 docker.go:617] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0725 12:52:02.089009   53119 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
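"kube-apiserver:v1.17.0 wasn't preloaded" is the outcome of comparing the runtime's image list against the set this Kubernetes version requires; the missing images feed LoadImages. A sketch of that comparison:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsLoad lists what the runtime already has and returns the required
// images that are absent.
func needsLoad(required []string) ([]string, error) {
	out, err := exec.Command("docker", "images", "--format",
		"{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	missing, err := needsLoad([]string{"k8s.gcr.io/kube-apiserver:v1.17.0", "k8s.gcr.io/pause:3.1"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("need to load:", missing)
}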
	I0725 12:52:02.096096   53119 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:52:02.096561   53119 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:52:02.097087   53119 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 12:52:02.097447   53119 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:52:02.097990   53119 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0725 12:52:02.098591   53119 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:52:02.099127   53119 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0725 12:52:02.099339   53119 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0725 12:52:02.102783   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:52:02.104009   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:52:02.104153   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:52:02.105060   53119 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 12:52:02.105178   53119 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0725 12:52:02.105255   53119 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0725 12:52:02.105537   53119 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:52:02.106939   53119 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0725 12:52:02.523768   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:52:02.549942   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:52:02.553051   53119 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0725 12:52:02.553081   53119 docker.go:292] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:52:02.553132   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0725 12:52:02.580833   53119 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0725 12:52:02.580858   53119 docker.go:292] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:52:02.580916   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0725 12:52:02.584169   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0725 12:52:02.584296   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0725 12:52:02.610896   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0725 12:52:02.610933   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0725 12:52:02.610949   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
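Each "needs transfer" decision above checks the image by content, not by tag: inspect its ID, and if the tag is absent or resolves to the wrong hash, remove it and reload from the cached tarball (the log prints the hash without its sha256: prefix). A sketch of that check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether ref must be (re)loaded: true when the
// runtime lacks the tag or holds it under a different ID, in which case
// the stale tag is removed first. expectedID is illustrative.
func needsTransfer(ref, expectedID string) (bool, error) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return true, nil // not present at all
	}
	if strings.TrimSpace(string(out)) == expectedID {
		return false, nil
	}
	// wrong content under this tag: drop it so the cached copy can load
	return true, exec.Command("docker", "rmi", ref).Run()
}

func main() {
	transfer, err := needsTransfer("k8s.gcr.io/pause:3.1",
		"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e")
	fmt.Println("needs transfer:", transfer, "err:", err)
}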
	I0725 12:52:02.611009   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0725 12:52:02.616542   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0725 12:52:02.616577   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0725 12:52:02.618438   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:52:02.661450   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0725 12:52:02.689818   53119 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0725 12:52:02.689850   53119 docker.go:292] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:52:02.689919   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0725 12:52:02.749871   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0725 12:52:02.760813   53119 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0725 12:52:02.760842   53119 docker.go:292] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0725 12:52:02.760897   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0725 12:52:02.774510   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:52:02.789969   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0725 12:52:02.790105   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0725 12:52:02.845087   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 12:52:02.852564   53119 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0725 12:52:02.852588   53119 docker.go:292] Removing image: k8s.gcr.io/pause:3.1
	I0725 12:52:02.852642   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0725 12:52:02.866284   53119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0725 12:52:02.886586   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0725 12:52:02.886771   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0725 12:52:02.900550   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0725 12:52:02.900595   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0725 12:52:02.907266   53119 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0725 12:52:02.907302   53119 docker.go:292] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:52:02.907360   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0725 12:52:02.949977   53119 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 12:52:02.950050   53119 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 12:52:02.950126   53119 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 12:52:02.972656   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0725 12:52:02.972832   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0725 12:52:03.020158   53119 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0725 12:52:03.020191   53119 docker.go:292] Removing image: k8s.gcr.io/coredns:1.6.5
	I0725 12:52:03.020206   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0725 12:52:03.020233   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0725 12:52:03.020256   53119 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0725 12:52:03.020265   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0725 12:52:03.020389   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0725 12:52:03.063109   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 12:52:03.063139   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0725 12:52:03.063162   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0725 12:52:03.063237   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 12:52:03.118601   53119 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0725 12:52:03.118665   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0725 12:52:03.118689   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0725 12:52:03.118766   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0725 12:52:03.133928   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0725 12:52:03.133965   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0725 12:52:03.187422   53119 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0725 12:52:03.187464   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0725 12:52:03.199390   53119 docker.go:259] Loading image: /var/lib/minikube/images/pause_3.1
	I0725 12:52:03.199414   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0725 12:52:03.453032   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0725 12:52:04.207135   53119 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 12:52:04.207151   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0725 12:52:04.859291   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 12:52:04.859316   53119 docker.go:259] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0725 12:52:04.859332   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0725 12:52:05.705385   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0725 12:52:06.099344   53119 docker.go:259] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0725 12:52:06.099362   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0725 12:52:08.134688   53119 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (2.035265557s)
	I0725 12:52:08.134703   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0725 12:52:08.134725   53119 docker.go:259] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0725 12:52:08.134736   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0725 12:52:09.037210   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0725 12:52:09.037250   53119 docker.go:259] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0725 12:52:09.037264   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0725 12:52:10.069118   53119 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load": (1.031814849s)
	I0725 12:52:10.069132   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0725 12:52:10.069148   53119 docker.go:259] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0725 12:52:10.069160   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0725 12:52:11.113126   53119 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.043928455s)
	I0725 12:52:11.113140   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0725 12:52:11.113159   53119 docker.go:259] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0725 12:52:11.113177   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0725 12:52:14.115582   53119 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.002322437s)
	I0725 12:52:14.115596   53119 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0725 12:52:14.115619   53119 cache_images.go:123] Successfully loaded all cached images
	I0725 12:52:14.115624   53119 cache_images.go:92] LoadImages completed in 12.026335928s
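
The exchange above is minikube's check-then-transfer-then-load pattern: stat each image tarball on the node, scp it from the host cache only when the stat fails, then pipe it through docker load. A minimal Go sketch of that pattern, shelling out to ssh/scp/docker with a hypothetical node alias and paths rather than minikube's internal ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage mirrors the pattern above: stat the cached tarball on the
// node, scp it over only when the stat fails, then docker-load it.
// "node" is a hypothetical ssh host alias, not a minikube API.
func ensureImage(node, localTar, remoteTar string) error {
	if exec.Command("ssh", node, "stat", remoteTar).Run() != nil {
		// Existence check failed: transfer from the local cache.
		if err := exec.Command("scp", localTar, node+":"+remoteTar).Run(); err != nil {
			return fmt.Errorf("scp %s: %w", localTar, err)
		}
	}
	// Load the image into the node's Docker daemon.
	out, err := exec.Command("ssh", node,
		"sudo /bin/bash -c 'cat "+remoteTar+" | docker load'").CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("minikube-node",
		"/tmp/cache/pause_3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println(err)
	}
}
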
	I0725 12:52:14.115695   53119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 12:52:14.190435   53119 cni.go:95] Creating CNI manager for ""
	I0725 12:52:14.190446   53119 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:52:14.190462   53119 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 12:52:14.190476   53119 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220725125155-44543 NodeName:test-preload-20220725125155-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 12:52:14.190571   53119 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220725125155-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 12:52:14.190640   53119 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220725125155-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725125155-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 12:52:14.190699   53119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0725 12:52:14.198246   53119 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0725 12:52:14.198291   53119 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0725 12:52:14.205316   53119 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0725 12:52:14.205317   53119 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubeadm
	I0725 12:52:14.205319   53119 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubectl
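
The download.go lines above use a ?checksum=file:<url>.sha256 suffix, meaning the binary's SHA-256 digest is fetched from a sidecar file and verified after download. A standard-library-only sketch of that verification (the URL is real, but this illustrates the convention rather than minikube's actual downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads url and checks it against the hex digest published
// at url+".sha256", mimicking the ?checksum=file:... convention in the log.
func fetchVerified(url string) ([]byte, error) {
	body, err := get(url)
	if err != nil {
		return nil, err
	}
	sumFile, err := get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	want := strings.Fields(string(sumFile))[0] // digest is the first token
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != want {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func get(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	bin, err := fetchVerified("https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("verified", len(bin), "bytes")
}
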
	I0725 12:52:15.439709   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0725 12:52:15.444216   53119 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0725 12:52:15.444240   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0725 12:52:15.542201   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0725 12:52:15.601585   53119 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0725 12:52:15.601610   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0725 12:52:16.617054   53119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:52:16.678567   53119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0725 12:52:16.736691   53119 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0725 12:52:16.736721   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0725 12:52:19.584406   53119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 12:52:19.591427   53119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0725 12:52:19.604544   53119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 12:52:19.617823   53119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0725 12:52:19.630628   53119 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 12:52:19.634383   53119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
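
The grep -v / echo / cp one-liner above makes the control-plane.minikube.internal mapping idempotent: drop any stale line ending in the hostname, append the current mapping, and copy the temp file back over /etc/hosts. An equivalent sketch in Go, using a scratch path instead of the real hosts file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so it contains exactly one line mapping ip to
// host, mirroring the grep -v / echo / cp idiom from the log.
func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping that ends in "<tab><host>".
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Scratch copy; point at /etc/hosts only with appropriate privileges.
	_ = os.WriteFile("/tmp/hosts.sketch", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := pinHost("/tmp/hosts.sketch", "192.168.67.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
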
	I0725 12:52:19.644354   53119 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543 for IP: 192.168.67.2
	I0725 12:52:19.644459   53119 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 12:52:19.644506   53119 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 12:52:19.644543   53119 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/client.key
	I0725 12:52:19.644555   53119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/client.crt with IP's: []
	I0725 12:52:20.344649   53119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/client.crt ...
	I0725 12:52:20.344661   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/client.crt: {Name:mk93ec87f4f24e974ab64b612d6f429aa4de15a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:52:20.344934   53119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/client.key ...
	I0725 12:52:20.344941   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/client.key: {Name:mk68f15911bce6b4189121630a02530153af09d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:52:20.345129   53119 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.key.c7fa3a9e
	I0725 12:52:20.345144   53119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 12:52:20.504348   53119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.crt.c7fa3a9e ...
	I0725 12:52:20.504359   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.crt.c7fa3a9e: {Name:mkef0ccad68e42c20858abcf1e2114d20257fc17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:52:20.504623   53119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.key.c7fa3a9e ...
	I0725 12:52:20.504633   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.key.c7fa3a9e: {Name:mk1d9f5ee3dc35783db82cb526c2b86b9ba7d1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:52:20.504807   53119 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.crt
	I0725 12:52:20.504952   53119 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.key
	I0725 12:52:20.505102   53119 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.key
	I0725 12:52:20.505117   53119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.crt with IP's: []
	I0725 12:52:20.771033   53119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.crt ...
	I0725 12:52:20.771048   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.crt: {Name:mkb48d52216e7a14159b9fa984b33eb1e2021f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:52:20.771314   53119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.key ...
	I0725 12:52:20.771322   53119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.key: {Name:mk1e9c87b7f3a77f4df929f756718c6ed2f45c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
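
The certs.go/crypto.go steps above generate CA-signed leaf certificates carrying the IP SANs [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]. A compact sketch of the same operation with crypto/x509, using a throwaway CA in place of minikube's ca.key (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate for the apiserver with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}
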
	I0725 12:52:20.771662   53119 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 12:52:20.771699   53119 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 12:52:20.771707   53119 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 12:52:20.771734   53119 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 12:52:20.771762   53119 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 12:52:20.771788   53119 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 12:52:20.771848   53119 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 12:52:20.772274   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 12:52:20.796150   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 12:52:20.813001   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 12:52:20.829454   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/test-preload-20220725125155-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 12:52:20.846315   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 12:52:20.871211   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 12:52:20.888528   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 12:52:20.908275   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 12:52:20.925455   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 12:52:20.943706   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 12:52:20.960924   53119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 12:52:20.978607   53119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 12:52:20.991923   53119 ssh_runner.go:195] Run: openssl version
	I0725 12:52:20.997301   53119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 12:52:21.005274   53119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 12:52:21.016864   53119 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 12:52:21.016912   53119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 12:52:21.022217   53119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 12:52:21.030402   53119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 12:52:21.038740   53119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:52:21.042719   53119 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:52:21.042756   53119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 12:52:21.049827   53119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 12:52:21.059290   53119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 12:52:21.067142   53119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 12:52:21.071497   53119 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 12:52:21.071536   53119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 12:52:21.080285   53119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 12:52:21.092702   53119 kubeadm.go:395] StartCluster: {Name:test-preload-20220725125155-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220725125155-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:52:21.092810   53119 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 12:52:21.121373   53119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 12:52:21.132233   53119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 12:52:21.145062   53119 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 12:52:21.145109   53119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 12:52:21.153480   53119 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 12:52:21.153503   53119 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 12:52:21.868839   53119 out.go:204]   - Generating certificates and keys ...
	I0725 12:52:24.409392   53119 out.go:204]   - Booting up control plane ...
	W0725 12:54:19.321212   53119 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220725125155-44543 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220725125155-44543 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 19:52:21.213822    1573 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 19:52:21.213933    1573 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:52:24.400572    1573 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:52:24.401850    1573 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 12:54:19.321245   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 12:54:19.745975   53119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:54:19.755411   53119 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 12:54:19.755459   53119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 12:54:19.762328   53119 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
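
At this point the bootstrapper has wiped state with kubeadm reset --force and is about to run init a second time; minikube retries exactly once. A minimal sketch of that one-retry flow, assuming kubeadm is on PATH (the real code drives these commands over ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v\n%s", args, err, out)
	}
	return nil
}

// initWithRetry mirrors the log: if kubeadm init fails, wipe state with
// kubeadm reset --force and try exactly once more.
func initWithRetry(kubeadm, config string) error {
	if err := run(kubeadm, "init", "--config", config); err == nil {
		return nil
	} else {
		fmt.Println("! initialization failed, will try again:", err)
	}
	if err := run(kubeadm, "reset", "--force"); err != nil {
		return err
	}
	return run(kubeadm, "init", "--config", config)
}

func main() {
	if err := initWithRetry("kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println("Error starting cluster:", err)
	}
}
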
	I0725 12:54:19.762353   53119 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 12:54:20.441821   53119 out.go:204]   - Generating certificates and keys ...
	I0725 12:54:20.896294   53119 out.go:204]   - Booting up control plane ...
	I0725 12:56:15.818358   53119 kubeadm.go:397] StartCluster complete in 3m54.720348319s
	I0725 12:56:15.818437   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 12:56:15.847399   53119 logs.go:274] 0 containers: []
	W0725 12:56:15.847411   53119 logs.go:276] No container was found matching "kube-apiserver"
	I0725 12:56:15.847466   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 12:56:15.875762   53119 logs.go:274] 0 containers: []
	W0725 12:56:15.875774   53119 logs.go:276] No container was found matching "etcd"
	I0725 12:56:15.875831   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 12:56:15.903984   53119 logs.go:274] 0 containers: []
	W0725 12:56:15.903996   53119 logs.go:276] No container was found matching "coredns"
	I0725 12:56:15.904052   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 12:56:15.933251   53119 logs.go:274] 0 containers: []
	W0725 12:56:15.933264   53119 logs.go:276] No container was found matching "kube-scheduler"
	I0725 12:56:15.933322   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 12:56:15.961796   53119 logs.go:274] 0 containers: []
	W0725 12:56:15.961811   53119 logs.go:276] No container was found matching "kube-proxy"
	I0725 12:56:15.961870   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 12:56:15.990721   53119 logs.go:274] 0 containers: []
	W0725 12:56:15.990735   53119 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 12:56:15.990793   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 12:56:16.020969   53119 logs.go:274] 0 containers: []
	W0725 12:56:16.020982   53119 logs.go:276] No container was found matching "storage-provisioner"
	I0725 12:56:16.021040   53119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 12:56:16.049888   53119 logs.go:274] 0 containers: []
	W0725 12:56:16.049901   53119 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 12:56:16.049907   53119 logs.go:123] Gathering logs for describe nodes ...
	I0725 12:56:16.049914   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 12:56:16.103493   53119 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 12:56:16.103504   53119 logs.go:123] Gathering logs for Docker ...
	I0725 12:56:16.103512   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 12:56:16.118576   53119 logs.go:123] Gathering logs for container status ...
	I0725 12:56:16.118592   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 12:56:18.176758   53119 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058108621s)
	I0725 12:56:18.176906   53119 logs.go:123] Gathering logs for kubelet ...
	I0725 12:56:18.176913   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 12:56:18.216023   53119 logs.go:123] Gathering logs for dmesg ...
	I0725 12:56:18.216035   53119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
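With no containers to inspect, the fallback diagnostics above come from the host: the docker and kubelet journals, container-runtime status (crictl if installed, otherwise docker), and recent kernel warnings. The same bundle, sketched as manual commands (profile name assumed from this run):

	# Illustrative: collect the same diagnostics minikube gathers above.
	minikube ssh -p test-preload-20220725125155-44543 -- 'sudo journalctl -u docker -n 400'
	minikube ssh -p test-preload-20220725125155-44543 -- 'sudo journalctl -u kubelet -n 400'
	minikube ssh -p test-preload-20220725125155-44543 -- 'sudo crictl ps -a || sudo docker ps -a'
	minikube ssh -p test-preload-20220725125155-44543 -- 'sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400'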
	W0725 12:56:18.229188   53119 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 19:54:19.822691    3843 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 19:54:19.822745    3843 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:54:20.895976    3843 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:54:20.896758    3843 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
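The wait loop kubeadm describes is exactly the curl it names: the kubelet serves a healthz endpoint on localhost:10248, and "connection refused" there means the kubelet process itself is down rather than merely unhealthy. Manual triage along the lines the output suggests (run inside the node; illustrative):

	# Probe the endpoint kubeadm polls; connection refused = kubelet not running.
	curl -sSL http://localhost:10248/healthz
	# Then see why the unit is down, per the advice above:
	systemctl status kubelet
	journalctl -xeu kubelet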
	W0725 12:56:18.229207   53119 out.go:239] * 
	W0725 12:56:18.229317   53119 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 19:54:19.822691    3843 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 19:54:19.822745    3843 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:54:20.895976    3843 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:54:20.896758    3843 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 12:56:18.229330   53119 out.go:239] * 
	W0725 12:56:18.229868   53119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
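For reference, the log-collection step the box asks for is a single command; an illustrative invocation against this profile:

	# Write the full diagnostic bundle to logs.txt for attachment to an issue.
	minikube logs --file=logs.txt -p test-preload-20220725125155-44543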
	I0725 12:56:18.293712   53119 out.go:177] 
	W0725 12:56:18.335483   53119 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0725 19:54:19.822691    3843 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0725 19:54:19.822745    3843 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0725 19:54:20.895976    3843 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0725 19:54:20.896758    3843 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 12:56:18.335552   53119 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 12:56:18.335594   53119 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 12:56:18.357495   53119 out.go:177] 
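Taken together, the exit advice amounts to re-running the same start command with the kubelet cgroup-driver override. A sketch of the suggested retry (the other flags are copied from the failing invocation below):

	# Illustrative retry with the suggested override:
	minikube start -p test-preload-20220725125155-44543 --driver=docker \
	  --kubernetes-version=v1.17.0 --memory=2200 \
	  --extra-config=kubelet.cgroup-driver=systemd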

                                                
                                                
** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220725125155-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-07-25 12:56:18.460196 -0700 PDT m=+2313.908742641
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220725125155-44543
helpers_test.go:235: (dbg) docker inspect test-preload-20220725125155-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f3b28af53982a343d67b6c74ce5f738da65eee9d83bb62dcf4cd61f6f6b8593",
	        "Created": "2022-07-25T19:51:57.349044774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 110156,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T19:51:57.681769287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/8f3b28af53982a343d67b6c74ce5f738da65eee9d83bb62dcf4cd61f6f6b8593/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f3b28af53982a343d67b6c74ce5f738da65eee9d83bb62dcf4cd61f6f6b8593/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f3b28af53982a343d67b6c74ce5f738da65eee9d83bb62dcf4cd61f6f6b8593/hosts",
	        "LogPath": "/var/lib/docker/containers/8f3b28af53982a343d67b6c74ce5f738da65eee9d83bb62dcf4cd61f6f6b8593/8f3b28af53982a343d67b6c74ce5f738da65eee9d83bb62dcf4cd61f6f6b8593-json.log",
	        "Name": "/test-preload-20220725125155-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220725125155-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220725125155-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ba5568453081121b3237854b1a40dd7a6437a0e6986b1d02e676aa0e1a62d42b-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba5568453081121b3237854b1a40dd7a6437a0e6986b1d02e676aa0e1a62d42b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba5568453081121b3237854b1a40dd7a6437a0e6986b1d02e676aa0e1a62d42b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba5568453081121b3237854b1a40dd7a6437a0e6986b1d02e676aa0e1a62d42b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220725125155-44543",
	                "Source": "/var/lib/docker/volumes/test-preload-20220725125155-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220725125155-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220725125155-44543",
	                "name.minikube.sigs.k8s.io": "test-preload-20220725125155-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "330e5ce7047852d4d09beffe491da392b7ce4a19661d9c09e219b9b1554aa4ab",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52746"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52743"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52744"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52745"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/330e5ce70478",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220725125155-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8f3b28af5398",
	                        "test-preload-20220725125155-44543"
	                    ],
	                    "NetworkID": "d9736e9c06fbeaefdb4d333014147a627666c1ed19e93b94acc66effc790954b",
	                    "EndpointID": "8da20742bb5eb4a05bec0ed69560f499843a6bb6f473b572160642202b7d65dc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
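One practical use of the inspect dump above: each node port (22 for SSH, 8443 for the API server, and so on) is published on an ephemeral host port, and the mapping can be read back with a Go template rather than parsed out of the JSON. An illustrative query for this container's API server port:

	# Print the host port bound to 8443/tcp (52745 in the dump above).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
	  test-preload-20220725125155-44543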
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220725125155-44543 -n test-preload-20220725125155-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220725125155-44543 -n test-preload-20220725125155-44543: exit status 6 (426.879392ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 12:56:18.951022   53529 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220725125155-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220725125155-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
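The exit-status-6 result is a kubeconfig problem rather than a host problem: the profile has no entry in the kubeconfig, so `status` cannot extract an endpoint. The fix the warning itself names, sketched for this profile:

	# Regenerate the profile's kubeconfig entry, then confirm the context.
	minikube update-context -p test-preload-20220725125155-44543
	kubectl config current-context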
helpers_test.go:175: Cleaning up "test-preload-20220725125155-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220725125155-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220725125155-44543: (2.527834155s)
--- FAIL: TestPreload (266.31s)

                                                
                                    
TestRunningBinaryUpgrade (66.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1610444717.exe start -p running-upgrade-20220725130124-44543 --memory=2200 --vm-driver=docker 
E0725 13:01:38.165694   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 13:01:44.436977   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1610444717.exe start -p running-upgrade-20220725130124-44543 --memory=2200 --vm-driver=docker : exit status 70 (50.990767944s)

                                                
                                                
-- stdout --
	! [running-upgrade-20220725130124-44543] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2378083508
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:01:58.095008091 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220725130124-44543" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:02:14.390009238 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220725130124-44543", then "minikube start -p running-upgrade-20220725130124-44543 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:02:14.390009238 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
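
The generated unit in the diff above documents its own failure mode: for a non-oneshot service, systemd accepts only one effective ExecStart=, so an override must blank the inherited command before supplying a new one. A minimal sketch of that pattern as a proper drop-in (the override path here is illustrative, not the file minikube writes):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf <<-'EOF'
	[Service]
	# clear the ExecStart inherited from the base unit first; otherwise systemd
	# reports "more than one ExecStart= setting" and refuses to start the service
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker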
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1610444717.exe start -p running-upgrade-20220725130124-44543 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1610444717.exe start -p running-upgrade-20220725130124-44543 --memory=2200 --vm-driver=docker : exit status 70 (4.732896611s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220725130124-44543] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2716601614
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220725130124-44543" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
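
The stderr above only surfaces the systemctl exit status; the actual dockerd error has to be read from inside the kic node container, as the log's own hint suggests. A sketch, assuming the container from this run is still up:

	# unit state and recent failures, from inside the node container
	docker exec running-upgrade-20220725130124-44543 systemctl status docker.service --no-pager
	# the daemon's own log lines
	docker exec running-upgrade-20220725130124-44543 journalctl -u docker.service --no-pager | tail -n 50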
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1610444717.exe start -p running-upgrade-20220725130124-44543 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1610444717.exe start -p running-upgrade-20220725130124-44543 --memory=2200 --vm-driver=docker : exit status 70 (4.476150238s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220725130124-44543] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2315347310
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220725130124-44543" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-07-25 13:02:28.497328 -0700 PDT m=+2683.937501195
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220725130124-44543
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220725130124-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "731ab36604448f8fad16dfc516c4027eb78feed6e124c7d2713fd228e8e61cfb",
	        "Created": "2022-07-25T20:02:06.331087268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 145075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:02:06.563066428Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/731ab36604448f8fad16dfc516c4027eb78feed6e124c7d2713fd228e8e61cfb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/731ab36604448f8fad16dfc516c4027eb78feed6e124c7d2713fd228e8e61cfb/hostname",
	        "HostsPath": "/var/lib/docker/containers/731ab36604448f8fad16dfc516c4027eb78feed6e124c7d2713fd228e8e61cfb/hosts",
	        "LogPath": "/var/lib/docker/containers/731ab36604448f8fad16dfc516c4027eb78feed6e124c7d2713fd228e8e61cfb/731ab36604448f8fad16dfc516c4027eb78feed6e124c7d2713fd228e8e61cfb-json.log",
	        "Name": "/running-upgrade-20220725130124-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220725130124-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bf00d80f76dda7590472fdc96b6b87bb822172d358ceffd64a1886760b8fca2b-init/diff:/var/lib/docker/overlay2/a93cd23c8b0acaf8fcced47c67e70f7a5e31c1e6373b4df9e8bfa8813efc5786/diff:/var/lib/docker/overlay2/0e92e1062a74cc932aab39fbb3ebc6ec8932495d723aa1f558dc1e7b61884cf1/diff:/var/lib/docker/overlay2/7f97644114605067dee71ba287cb6dcc6ac132c45670b05ab1096fba45f0f23d/diff:/var/lib/docker/overlay2/40d061e8d727c4270315a31be5d582b45d0850a15062f0c0c7351ca044ee11a4/diff:/var/lib/docker/overlay2/8c5a7d2c367dd2227945df13adc227447c4ca216af2f23656fde43dc2f11e791/diff:/var/lib/docker/overlay2/f047fec38b8395d9c86c11e004b410acaaae64afd90eb5bdc0b132ca680b1934/diff:/var/lib/docker/overlay2/f1557fb2a8002ba48e9c8f0879a0e068fa68ac48cb7f5016dfbda99fe0f32445/diff:/var/lib/docker/overlay2/0a755a82ef1cc3eb08218b3353f260c251a5dcdc32c2edd6e782a134e1efc430/diff:/var/lib/docker/overlay2/a3822357b1c58b7b72c0f0dfef6b77accae88438d2038cc08c7e76bb8293bff0/diff:/var/lib/docker/overlay2/e18fca943054792455ca6262bb8a9150f288cac9a8e4141ba1410ca60dcccc98/diff:/var/lib/docker/overlay2/8ef76a523face9af34614e010e76ebb24c95553a918a349e5e67034e66136c01/diff:/var/lib/docker/overlay2/b15cb0911b9d281bba423abe79db5d01db85688003a73da86608923400e80389/diff:/var/lib/docker/overlay2/7c7d6e5ea166308d307eaf0bec66920f760683014e0277775dda7eab4ccec54b/diff:/var/lib/docker/overlay2/2f66f09e8477227550b4f4206baa8b06d51d7b4cac1f77ca77b73ba0a5f3fd74/diff:/var/lib/docker/overlay2/58e6a533380f3f5a7ea17a4ad175b53fba655e5eeb5d5dc645ce466b0c66a721/diff:/var/lib/docker/overlay2/f12e60ed4b2ddca167074f613e5f43ca27b1000a08831b029d91af2ccecb297c/diff:/var/lib/docker/overlay2/4b3655cac870ba13905b599d127c2cc5ca6890888cd04ad05c438c614ad66713/diff:/var/lib/docker/overlay2/527cd79c63316f8f8118fbb9a009fa7e1f6573946ee772dcc2560650d2303163/diff:/var/lib/docker/overlay2/a715635447391d038388216af45064ed801e1525d51b5797d94ff2d1473bf61c/diff:/var/lib/docker/overlay2/2f071347721a6fe87069b2a4d2964abd4df22ad1bd9e9df5165d8aad8e72c50f/diff:/var/lib/docker/overlay2/f2a14fbca645cb1f4fdfb914ee9e6ec382777d2febaa55b6752fc402a52b6b47/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf00d80f76dda7590472fdc96b6b87bb822172d358ceffd64a1886760b8fca2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf00d80f76dda7590472fdc96b6b87bb822172d358ceffd64a1886760b8fca2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf00d80f76dda7590472fdc96b6b87bb822172d358ceffd64a1886760b8fca2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220725130124-44543",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220725130124-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220725130124-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220725130124-44543",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220725130124-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8c88134f0fe8cebf081acf3a8d38d4f452c008a23c627f347fc860aa74b1f81",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54604"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54605"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54606"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8c88134f0fe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "0797c94f6ba6ac24a3e2f4e72ca8c2b7e133f70d2ec3628d1342702b5ed7fec0",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "58f47bf284f1e7ae0089ef140c63bd0490c46f229b6b6bfbb4dd62a5554dfbc3",
	                    "EndpointID": "0797c94f6ba6ac24a3e2f4e72ca8c2b7e133f70d2ec3628d1342702b5ed7fec0",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
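
The inspect output shows the node's SSH, dockerd, and API server ports (22, 2376, 8443) published on loopback as 54604-54606. The same values can be read directly with docker's Go-template formatter instead of scanning the JSON; for example:

	# host port backing the Kubernetes API server (8443/tcp) of this node container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' running-upgrade-20220725130124-44543
	# prints 54606 for the container captured above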
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220725130124-44543 -n running-upgrade-20220725130124-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220725130124-44543 -n running-upgrade-20220725130124-44543: exit status 6 (445.155763ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 13:02:29.002387   55667 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220725130124-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

                                                
                                                
** /stderr **
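
The status error is the kubeconfig no longer carrying an endpoint for this profile, which is exactly what the printed warning points at. The suggested repair, sketched:

	# rewrite the kubeconfig entry for the stale profile, then confirm the context
	minikube update-context -p running-upgrade-20220725130124-44543
	kubectl config current-context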
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220725130124-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220725130124-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220725130124-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220725130124-44543: (2.433459077s)
--- FAIL: TestRunningBinaryUpgrade (66.45s)
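
For reproducing this failure outside CI, the recovery hint printed earlier in the run amounts to a clean re-create with verbose logging:

	minikube delete -p running-upgrade-20220725130124-44543
	minikube start -p running-upgrade-20220725130124-44543 --alsologtostderr -v=1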

                                                
                                    
TestKubernetesUpgrade (576.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0725 13:03:56.004329   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.010835   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.023063   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.043993   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.084303   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.164451   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.326354   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:56.646955   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:57.286482   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:03:58.566069   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:04:01.126175   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:04:06.243472   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m14.845453202s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220725130322-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220725130322-44543 in cluster kubernetes-upgrade-20220725130322-44543
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
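
The duplicated "Generating certificates" / "Booting up control plane" lines in the stdout above are the bootstrap being retried before minikube gives up with exit status 109. When triaging this locally, the aggregated node logs are usually the quickest signal; a sketch:

	# only the log entries minikube classifies as known problems
	minikube logs -p kubernetes-upgrade-20220725130322-44543 --problems
	# or the tail of the full component output
	minikube logs -p kubernetes-upgrade-20220725130322-44543 | tail -n 100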
** stderr ** 
	I0725 13:03:22.383805   56035 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:03:22.383981   56035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:03:22.383986   56035 out.go:309] Setting ErrFile to fd 2...
	I0725 13:03:22.383990   56035 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:03:22.384095   56035 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:03:22.384616   56035 out.go:303] Setting JSON to false
	I0725 13:03:22.400409   56035 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":14574,"bootTime":1658764828,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:03:22.400584   56035 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:03:22.422761   56035 out.go:177] * [kubernetes-upgrade-20220725130322-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:03:22.465892   56035 notify.go:193] Checking for updates...
	I0725 13:03:22.487454   56035 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:03:22.561504   56035 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:03:22.582379   56035 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:03:22.603671   56035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:03:22.625985   56035 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:03:22.648103   56035 config.go:178] Loaded profile config "cert-expiration-20220725130044-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:03:22.648193   56035 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:03:22.718923   56035 docker.go:137] docker version: linux-20.10.17
	I0725 13:03:22.719073   56035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:03:22.853127   56035 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:03:22.794759291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:03:22.882991   56035 out.go:177] * Using the docker driver based on user configuration
	I0725 13:03:22.920897   56035 start.go:284] selected driver: docker
	I0725 13:03:22.920924   56035 start.go:808] validating driver "docker" against <nil>
	I0725 13:03:22.920952   56035 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:03:22.923746   56035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:03:23.056539   56035 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:03:22.999362644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:03:23.056670   56035 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 13:03:23.056819   56035 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 13:03:23.078671   56035 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 13:03:23.100486   56035 cni.go:95] Creating CNI manager for ""
	I0725 13:03:23.100521   56035 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:03:23.100535   56035 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:03:23.122295   56035 out.go:177] * Starting control plane node kubernetes-upgrade-20220725130322-44543 in cluster kubernetes-upgrade-20220725130322-44543
	I0725 13:03:23.164563   56035 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:03:23.186423   56035 out.go:177] * Pulling base image ...
	I0725 13:03:23.228628   56035 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:03:23.228654   56035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:03:23.293799   56035 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:03:23.293834   56035 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:03:23.300247   56035 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 13:03:23.300261   56035 cache.go:57] Caching tarball of preloaded images
	I0725 13:03:23.300522   56035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:03:23.344316   56035 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0725 13:03:23.365486   56035 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 13:03:23.460938   56035 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 13:03:27.654083   56035 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 13:03:27.654244   56035 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0725 13:03:28.231138   56035 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
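	# A by-hand re-check of the preload verification above (sketch): the download
	# step enforces the md5 passed in the URL's checksum query parameter, so the
	# cached tarball should hash to 326f3ce331abb64565b50b8c9e791244.
	curl -fLo /tmp/preload.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	md5 /tmp/preload.tar.lz4    # macOS; use md5sum on Linux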
	I0725 13:03:28.231221   56035 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/config.json ...
	I0725 13:03:28.231242   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/config.json: {Name:mk21b4bb1380733cb606b16f27f30b33647f4c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:28.231513   56035 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:03:28.231544   56035 start.go:370] acquiring machines lock for kubernetes-upgrade-20220725130322-44543: {Name:mk3e7763670f2855f6746ef40eb840a24b5302f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:03:28.231634   56035 start.go:374] acquired machines lock for "kubernetes-upgrade-20220725130322-44543" in 82.264µs
	I0725 13:03:28.231657   56035 start.go:92] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:03:28.231698   56035 start.go:132] createHost starting for "" (driver="docker")
	I0725 13:03:28.272772   56035 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 13:03:28.273023   56035 start.go:166] libmachine.API.Create for "kubernetes-upgrade-20220725130322-44543" (driver="docker")
	I0725 13:03:28.273046   56035 client.go:168] LocalClient.Create starting
	I0725 13:03:28.273117   56035 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem
	I0725 13:03:28.273151   56035 main.go:134] libmachine: Decoding PEM data...
	I0725 13:03:28.273164   56035 main.go:134] libmachine: Parsing certificate...
	I0725 13:03:28.273213   56035 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem
	I0725 13:03:28.273237   56035 main.go:134] libmachine: Decoding PEM data...
	I0725 13:03:28.273245   56035 main.go:134] libmachine: Parsing certificate...
	I0725 13:03:28.273654   56035 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220725130322-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 13:03:28.338102   56035 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220725130322-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 13:03:28.338202   56035 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220725130322-44543] to gather additional debugging logs...
	I0725 13:03:28.338223   56035 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220725130322-44543
	W0725 13:03:28.401334   56035 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220725130322-44543 returned with exit code 1
	I0725 13:03:28.401372   56035 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220725130322-44543]: docker network inspect kubernetes-upgrade-20220725130322-44543: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220725130322-44543
	I0725 13:03:28.401404   56035 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220725130322-44543]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220725130322-44543
	
	** /stderr **
	I0725 13:03:28.401484   56035 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 13:03:28.465722   56035 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e9d0] misses:0}
	I0725 13:03:28.465767   56035 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.465783   56035 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725130322-44543 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 13:03:28.465878   56035 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543
	W0725 13:03:28.528605   56035 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543 returned with exit code 1
	W0725 13:03:28.528656   56035 network_create.go:107] failed to create docker network kubernetes-upgrade-20220725130322-44543 192.168.49.0/24, will retry: subnet is taken
	I0725 13:03:28.528998   56035 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e9d0] amended:false}} dirty:map[] misses:0}
	I0725 13:03:28.529014   56035 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.529206   56035 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e9d0] amended:true}} dirty:map[192.168.49.0:0xc00000e9d0 192.168.58.0:0xc000618298] misses:0}
	I0725 13:03:28.529222   56035 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.529229   56035 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725130322-44543 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 13:03:28.529286   56035 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543
	W0725 13:03:28.591239   56035 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543 returned with exit code 1
	W0725 13:03:28.591294   56035 network_create.go:107] failed to create docker network kubernetes-upgrade-20220725130322-44543 192.168.58.0/24, will retry: subnet is taken
	I0725 13:03:28.591578   56035 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e9d0] amended:true}} dirty:map[192.168.49.0:0xc00000e9d0 192.168.58.0:0xc000618298] misses:1}
	I0725 13:03:28.591610   56035 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.591818   56035 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e9d0] amended:true}} dirty:map[192.168.49.0:0xc00000e9d0 192.168.58.0:0xc000618298 192.168.67.0:0xc00000ea18] misses:1}
	I0725 13:03:28.591831   56035 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.591840   56035 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725130322-44543 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 13:03:28.591910   56035 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543
	W0725 13:03:28.653877   56035 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543 returned with exit code 1
	W0725 13:03:28.653919   56035 network_create.go:107] failed to create docker network kubernetes-upgrade-20220725130322-44543 192.168.67.0/24, will retry: subnet is taken
	I0725 13:03:28.654203   56035 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e9d0] amended:true}} dirty:map[192.168.49.0:0xc00000e9d0 192.168.58.0:0xc000618298 192.168.67.0:0xc00000ea18] misses:2}
	I0725 13:03:28.654220   56035 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.654420   56035 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e9d0] amended:true}} dirty:map[192.168.49.0:0xc00000e9d0 192.168.58.0:0xc000618298 192.168.67.0:0xc00000ea18 192.168.76.0:0xc000618340] misses:2}
	I0725 13:03:28.654433   56035 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:03:28.654439   56035 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220725130322-44543 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0725 13:03:28.654499   56035 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 kubernetes-upgrade-20220725130322-44543
	I0725 13:03:28.749782   56035 network_create.go:99] docker network kubernetes-upgrade-20220725130322-44543 192.168.76.0/24 created
	I0725 13:03:28.749816   56035 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-20220725130322-44543" container
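The four create attempts above walk minikube's candidate /24 pool: starting at 192.168.49.0 and stepping the third octet by 9 (49, 58, 67, 76) until `docker network create` succeeds, after which the node gets the first client address (gateway + 1, here 192.168.76.2). A rough sketch of that walk, assuming an overlapping pool surfaces as a non-zero exit from Docker:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name := "kubernetes-upgrade-20220725130322-44543"
        for octet := 49; octet <= 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name)
            if err := cmd.Run(); err != nil {
                // e.g. "Pool overlaps with other one on this address space"
                fmt.Printf("subnet %s taken, retrying\n", subnet)
                continue
            }
            fmt.Printf("created %s on %s; node gets the gateway+1 address\n", name, subnet)
            break
        }
    }

The in-memory reservation map shown in the log exists so that concurrent profile creations on the same host skip each other's subnets for a minute even before Docker rejects them.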
	I0725 13:03:28.749907   56035 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 13:03:28.816698   56035 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220725130322-44543 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 --label created_by.minikube.sigs.k8s.io=true
	I0725 13:03:28.879012   56035 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220725130322-44543
	I0725 13:03:28.879139   56035 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220725130322-44543-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220725130322-44543:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 13:03:29.346893   56035 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220725130322-44543
	I0725 13:03:29.347054   56035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:03:29.347069   56035 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 13:03:29.347161   56035 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220725130322-44543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 13:03:33.130249   56035 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220725130322-44543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (3.782921446s)
	I0725 13:03:33.130269   56035 kic.go:188] duration metric: took 3.783114 seconds to extract preloaded images to volume
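Two tricks are visible above: the short-lived "preload-sidecar" run (`--entrypoint /usr/bin/test ... -d /var/lib`) exists only to make Docker create and populate the named volume, and the extraction then bind-mounts the lz4 tarball read-only into a throwaway kicbase container and untars it straight into that volume with `tar -I lz4`, so the node container boots with images already under /var. A sketch of the extraction step (the long Jenkins cache path is elided here; the full path is in the command above):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        vol := "kubernetes-upgrade-20220725130322-44543"
        tarball := "preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4" // full host path elided
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481"
        start := time.Now()
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", vol+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        fmt.Printf("extracted preload in %s\n", time.Since(start)) // the log shows ~3.8s
    }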
	I0725 13:03:33.130393   56035 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 13:03:33.263482   56035 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220725130322-44543 --name kubernetes-upgrade-20220725130322-44543 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220725130322-44543 --network kubernetes-upgrade-20220725130322-44543 --ip 192.168.76.2 --volume kubernetes-upgrade-20220725130322-44543:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 13:03:33.637834   56035 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Running}}
	I0725 13:03:33.710991   56035 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:03:33.791426   56035 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220725130322-44543 stat /var/lib/dpkg/alternatives/iptables
	I0725 13:03:33.909647   56035 oci.go:144] the created container "kubernetes-upgrade-20220725130322-44543" has a running status.
	I0725 13:03:33.909675   56035 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa...
	I0725 13:03:34.203294   56035 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 13:03:34.319471   56035 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:03:34.392642   56035 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 13:03:34.392661   56035 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220725130322-44543 chown docker:docker /home/docker/.ssh/authorized_keys]
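The kic runner above generates an RSA machine key on the host, copies the public half into the container as /home/docker/.ssh/authorized_keys, and fixes ownership via a privileged exec. A simplified sketch; minikube stages the key through a temp file rather than piping through tee as done here:

    package main

    import (
        "bytes"
        "crypto/rand"
        "crypto/rsa"
        "os/exec"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        authorized := ssh.MarshalAuthorizedKey(pub) // "ssh-rsa AAAA...\n"

        name := "kubernetes-upgrade-20220725130322-44543"
        // Write the key, then fix ownership, mirroring the two kic_runner steps above.
        cp := exec.Command("docker", "exec", "-i", name,
            "tee", "/home/docker/.ssh/authorized_keys")
        cp.Stdin = bytes.NewReader(authorized)
        if err := cp.Run(); err != nil {
            panic(err)
        }
        if err := exec.Command("docker", "exec", "--privileged", name,
            "chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run(); err != nil {
            panic(err)
        }
    }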
	I0725 13:03:34.520336   56035 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:03:34.593713   56035 machine.go:88] provisioning docker machine ...
	I0725 13:03:34.593778   56035 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220725130322-44543"
	I0725 13:03:34.593887   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:34.667849   56035 main.go:134] libmachine: Using SSH client type: native
	I0725 13:03:34.668058   56035 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0725 13:03:34.668071   56035 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220725130322-44543 && echo "kubernetes-upgrade-20220725130322-44543" | sudo tee /etc/hostname
	I0725 13:03:34.794484   56035 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220725130322-44543
	
	I0725 13:03:34.794584   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:34.866092   56035 main.go:134] libmachine: Using SSH client type: native
	I0725 13:03:34.866269   56035 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0725 13:03:34.866285   56035 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220725130322-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220725130322-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220725130322-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:03:34.989208   56035 main.go:134] libmachine: SSH cmd err, output: <nil>: 
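The "native" SSH client lines above are libmachine dialing 127.0.0.1:55096, the host port Docker published for the container's 22/tcp, using the machine key generated earlier; the two commands set the hostname and pin it in /etc/hosts. A minimal sketch of that client with golang.org/x/crypto/ssh (host-key checking disabled, as is effectively the case in this throwaway test context):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa") // the machine key path from the log
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:55096", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname kubernetes-upgrade-20220725130322-44543 && echo "kubernetes-upgrade-20220725130322-44543" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }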
	I0725 13:03:34.989237   56035 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:03:34.989256   56035 ubuntu.go:177] setting up certificates
	I0725 13:03:34.989267   56035 provision.go:83] configureAuth start
	I0725 13:03:34.989335   56035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:35.062241   56035 provision.go:138] copyHostCerts
	I0725 13:03:35.062317   56035 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:03:35.062324   56035 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:03:35.062431   56035 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:03:35.062610   56035 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:03:35.062619   56035 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:03:35.062681   56035 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:03:35.062819   56035 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:03:35.062825   56035 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:03:35.062883   56035 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:03:35.063004   56035 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220725130322-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220725130322-44543]
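The server cert above is signed by the cached minikube CA with SANs covering the static IP, loopback, and the hostname aliases, so both `dockerd --tlsverify` and later API access validate against it. A self-contained sketch of that signing step with crypto/x509; the field choices here are assumptions, and minikube's provision code carries more options:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed stand-in for the minikube CA (this run skips CA
        // generation because ca.key already exists on the Jenkins host).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        ca, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Server cert whose SANs mirror the log's san=[...] list.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20220725130322-44543"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20220725130322-44543"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("server cert DER bytes:", len(der))
    }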
	I0725 13:03:35.416431   56035 provision.go:172] copyRemoteCerts
	I0725 13:03:35.416484   56035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:03:35.416534   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:35.488341   56035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:03:35.576602   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0725 13:03:35.593505   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:03:35.610179   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:03:35.627527   56035 provision.go:86] duration metric: configureAuth took 638.234076ms
	I0725 13:03:35.627543   56035 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:03:35.627672   56035 config.go:178] Loaded profile config "kubernetes-upgrade-20220725130322-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:03:35.627725   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:35.699806   56035 main.go:134] libmachine: Using SSH client type: native
	I0725 13:03:35.699971   56035 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0725 13:03:35.699994   56035 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:03:35.822859   56035 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:03:35.822872   56035 ubuntu.go:71] root file system type: overlay
	I0725 13:03:35.823054   56035 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:03:35.823130   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:35.894952   56035 main.go:134] libmachine: Using SSH client type: native
	I0725 13:03:35.895182   56035 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0725 13:03:35.895228   56035 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:03:36.022663   56035 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:03:36.022749   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:36.093316   56035 main.go:134] libmachine: Using SSH client type: native
	I0725 13:03:36.093494   56035 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0725 13:03:36.093507   56035 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:03:36.668774   56035 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:03:36.026670653 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0725 13:03:36.668794   56035 machine.go:91] provisioned docker machine in 2.075016379s
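The diff output above comes from the update-if-changed one-liner (`diff -u ... || { mv; daemon-reload; enable; restart; }`): the rendered unit only replaces the shipped one, and docker only restarts, when the two actually differ, which keeps repeated provisioning idempotent. The same logic, sketched locally in Go rather than over SSH:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func updateUnit(rendered []byte) error {
        const path = "/lib/systemd/system/docker.service"
        current, _ := os.ReadFile(path) // a missing file simply counts as "differs"
        if bytes.Equal(current, rendered) {
            return nil // nothing to do, no restart
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        // Toy rendered unit; the real content is the heredoc shown earlier.
        if err := updateUnit([]byte("[Unit]\nDescription=Docker Application Container Engine\n")); err != nil {
            panic(err)
        }
    }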
	I0725 13:03:36.668800   56035 client.go:171] LocalClient.Create took 8.395559465s
	I0725 13:03:36.668819   56035 start.go:174] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220725130322-44543" took 8.395600643s
	I0725 13:03:36.668829   56035 start.go:307] post-start starting for "kubernetes-upgrade-20220725130322-44543" (driver="docker")
	I0725 13:03:36.668834   56035 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:03:36.668890   56035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:03:36.668932   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:36.743209   56035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:03:36.832152   56035 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:03:36.835666   56035 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:03:36.835683   56035 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:03:36.835690   56035 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:03:36.835698   56035 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:03:36.835708   56035 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:03:36.835819   56035 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:03:36.835975   56035 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:03:36.836130   56035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:03:36.842841   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:03:36.859693   56035 start.go:310] post-start completed in 190.851698ms
	I0725 13:03:36.860222   56035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:36.931676   56035 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/config.json ...
	I0725 13:03:36.932063   56035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:03:36.932109   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:37.006264   56035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:03:37.089983   56035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:03:37.094469   56035 start.go:135] duration metric: createHost completed in 8.862564317s
	I0725 13:03:37.094483   56035 start.go:82] releasing machines lock for "kubernetes-upgrade-20220725130322-44543", held for 8.862641048s
	I0725 13:03:37.094566   56035 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:37.167493   56035 ssh_runner.go:195] Run: systemctl --version
	I0725 13:03:37.167493   56035 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:03:37.167565   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:37.167609   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:37.244503   56035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:03:37.247475   56035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:03:37.461851   56035 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:03:37.471718   56035 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:03:37.471775   56035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:03:37.480378   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:03:37.492886   56035 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:03:37.561697   56035 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:03:37.633775   56035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:03:37.705671   56035 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:03:37.909064   56035 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:03:37.943885   56035 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:03:38.002418   56035 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 13:03:38.002519   56035 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220725130322-44543 dig +short host.docker.internal
	I0725 13:03:38.137435   56035 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:03:38.137552   56035 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:03:38.141603   56035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:03:38.151108   56035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:03:38.224723   56035 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:03:38.224863   56035 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:03:38.255469   56035 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:03:38.255483   56035 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:03:38.255545   56035 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:03:38.284485   56035 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:03:38.284507   56035 cache_images.go:84] Images are preloaded, skipping loading
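The two identical image listings above are minikube confirming the preload took: every v1.16.0 control-plane image must already be in the daemon, otherwise it falls back to loading cached images one by one. A sketch of that check against the image list printed in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        want := []string{
            "k8s.gcr.io/kube-apiserver:v1.16.0",
            "k8s.gcr.io/kube-controller-manager:v1.16.0",
            "k8s.gcr.io/kube-proxy:v1.16.0",
            "k8s.gcr.io/kube-scheduler:v1.16.0",
            "k8s.gcr.io/etcd:3.3.15-0",
            "k8s.gcr.io/coredns:1.6.2",
            "k8s.gcr.io/pause:3.1",
        }
        for _, img := range want {
            if !have[img] {
                fmt.Println("missing, would fall back to image loading:", img)
            }
        }
    }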
	I0725 13:03:38.284575   56035 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:03:38.354001   56035 cni.go:95] Creating CNI manager for ""
	I0725 13:03:38.354013   56035 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:03:38.354027   56035 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:03:38.354052   56035 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220725130322-44543 NodeName:kubernetes-upgrade-20220725130322-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:03:38.354175   56035 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220725130322-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220725130322-44543
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
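The kubeadm.yaml above (apiVersion kubeadm.k8s.io/v1beta1, matching v1.16.0) is rendered from a Go template filled with the kubeadm options struct logged just before it: advertise address, cert SANs, the systemd cgroup driver, and the relaxed eviction thresholds. A toy rendering of the first stanza; the template text is abbreviated and the struct fields are illustrative, not minikube's real template data:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.168.76.2", 8443})
    }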
	I0725 13:03:38.354267   56035 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220725130322-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:03:38.354327   56035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 13:03:38.361645   56035 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:03:38.361697   56035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:03:38.368440   56035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0725 13:03:38.385967   56035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:03:38.398898   56035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0725 13:03:38.411240   56035 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:03:38.414752   56035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:03:38.423774   56035 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543 for IP: 192.168.76.2
	I0725 13:03:38.423885   56035 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:03:38.423934   56035 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:03:38.423973   56035 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key
	I0725 13:03:38.423987   56035 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt with IP's: []
	I0725 13:03:38.577179   56035 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt ...
	I0725 13:03:38.577191   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt: {Name:mk1905e49325d870970ba7dff84ab1486c9d0059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:38.577495   56035 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key ...
	I0725 13:03:38.577507   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key: {Name:mk68f7bc5805669fe340177bbb2174e4a5be5843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:38.577700   56035 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key.31bdca25
	I0725 13:03:38.577715   56035 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 13:03:38.744127   56035 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt.31bdca25 ...
	I0725 13:03:38.744140   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt.31bdca25: {Name:mk98aec01b6178610007c91b5d6250a46cd1f902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:38.744382   56035 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key.31bdca25 ...
	I0725 13:03:38.744393   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key.31bdca25: {Name:mk0636210e69127f4a74475a053a244f782da6a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:38.744577   56035 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt
	I0725 13:03:38.744729   56035 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key
	I0725 13:03:38.744884   56035 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key
	I0725 13:03:38.744898   56035 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.crt with IP's: []
	I0725 13:03:39.045443   56035 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.crt ...
	I0725 13:03:39.045458   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.crt: {Name:mk6416d3c0e92d170313de359f57ebb0dfd3aad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:39.045718   56035 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key ...
	I0725 13:03:39.045726   56035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key: {Name:mk214b1e0e8cf046c0f3b66b0a774e5126729e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:03:39.046125   56035 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:03:39.046168   56035 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:03:39.046180   56035 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:03:39.046229   56035 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:03:39.046262   56035 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:03:39.046288   56035 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:03:39.046357   56035 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:03:39.046921   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:03:39.064512   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:03:39.081129   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:03:39.097843   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 13:03:39.117176   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:03:39.134085   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:03:39.150845   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:03:39.167406   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:03:39.183725   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:03:39.201555   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:03:39.218399   56035 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:03:39.234848   56035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:03:39.246878   56035 ssh_runner.go:195] Run: openssl version
	I0725 13:03:39.252966   56035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:03:39.260693   56035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:03:39.264621   56035 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:03:39.264678   56035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:03:39.271448   56035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:03:39.282441   56035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:03:39.294296   56035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:03:39.299640   56035 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:03:39.299703   56035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:03:39.306510   56035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:03:39.315251   56035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:03:39.324573   56035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:03:39.328388   56035 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:03:39.328452   56035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:03:39.334237   56035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
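(The three openssl/ln sequences above implement OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. A minimal sketch of one rehash step, reusing the minikubeCA.pem path from the log — illustrative shell only, not minikube's Go code:

    # Compute the CA's subject hash, then (re)create the hashed symlink
    # that OpenSSL-based clients use to locate the certificate.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
)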
	I0725 13:03:39.341898   56035 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:03:39.341990   56035 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:03:39.370192   56035 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:03:39.378119   56035 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:03:39.385443   56035 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:03:39.385492   56035 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:03:39.392568   56035 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:03:39.392592   56035 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:03:40.126764   56035 out.go:204]   - Generating certificates and keys ...
	I0725 13:03:42.639711   56035 out.go:204]   - Booting up control plane ...
	W0725 13:05:37.526685   56035 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220725130322-44543 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220725130322-44543 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 13:05:37.526723   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:05:37.949159   56035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:05:37.959238   56035 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:05:37.959296   56035 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:05:37.967057   56035 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:05:37.967082   56035 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:05:38.728666   56035 out.go:204]   - Generating certificates and keys ...
	I0725 13:05:39.584162   56035 out.go:204]   - Booting up control plane ...
	I0725 13:07:34.500089   56035 kubeadm.go:397] StartCluster complete in 3m55.181137684s
	I0725 13:07:34.500167   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:07:34.528253   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.528266   56035 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:07:34.528320   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:07:34.559734   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.559750   56035 logs.go:276] No container was found matching "etcd"
	I0725 13:07:34.559814   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:07:34.591683   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.591696   56035 logs.go:276] No container was found matching "coredns"
	I0725 13:07:34.591749   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:07:34.624335   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.624348   56035 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:07:34.624428   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:07:34.654256   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.654269   56035 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:07:34.654325   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:07:34.683176   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.683190   56035 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:07:34.683253   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:07:34.711503   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.711516   56035 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:07:34.711580   56035 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:07:34.740232   56035 logs.go:274] 0 containers: []
	W0725 13:07:34.740245   56035 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:07:34.740253   56035 logs.go:123] Gathering logs for Docker ...
	I0725 13:07:34.740260   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:07:34.756150   56035 logs.go:123] Gathering logs for container status ...
	I0725 13:07:34.756161   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:07:36.810521   56035 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054307461s)
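(The container-status probe just completed is a shell fallback chain: prefer crictl when it is installed, otherwise fall back to the Docker CLI. The same idiom written out as a sketch, with `command -v` as the builtin equivalent of the `which` used above:

    # Try crictl first; if it is absent the first command fails and
    # the Docker CLI handles the listing instead.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
)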
	I0725 13:07:36.810654   56035 logs.go:123] Gathering logs for kubelet ...
	I0725 13:07:36.810663   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:07:36.851609   56035 logs.go:123] Gathering logs for dmesg ...
	I0725 13:07:36.851622   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:07:36.865179   56035 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:07:36.865195   56035 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:07:36.916774   56035 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0725 13:07:36.916793   56035 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 13:07:36.916807   56035 out.go:239] * 
	W0725 13:07:36.916920   56035 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:07:36.916934   56035 out.go:239] * 
	W0725 13:07:36.917615   56035 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 13:07:36.982433   56035 out.go:177] 
	W0725 13:07:37.025483   56035 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:07:37.025640   56035 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 13:07:37.025759   56035 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 13:07:37.089017   56035 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
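The exit-109 failure above is kubeadm's wait-control-plane phase giving up after the kubelet never answered its health endpoint. The probes kubeadm reports can be replayed by hand inside the node (e.g. via `minikube ssh -p kubernetes-upgrade-20220725130322-44543`); a minimal triage sequence assembled from the commands the log itself suggests:

    # The health probe kubeadm retries; "connection refused" means the
    # kubelet process never came up.
    curl -sSL http://localhost:10248/healthz
    # Service state and recent kubelet logs.
    systemctl status kubelet
    journalctl -xeu kubelet
    # Control-plane containers that may have started and then crashed.
    docker ps -a | grep kube | grep -v pause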
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220725130322-44543
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220725130322-44543: (1.645772448s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725130322-44543 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725130322-44543 status --format={{.Host}}: exit status 7 (117.552665ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker : (4m35.888391404s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220725130322-44543 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (528.875498ms)

-- stdout --
	* [kubernetes-upgrade-20220725130322-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.24.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220725130322-44543
	    minikube start -p kubernetes-upgrade-20220725130322-44543 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220725130322-445432 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.24.2, by running:
	    
	    minikube start -p kubernetes-upgrade-20220725130322-44543 --kubernetes-version=v1.24.2
	    

** /stderr **
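Exit status 106 is the refusal the test asserts on (K8S_DOWNGRADE_UNSUPPORTED); a caller scripting the same probe could branch on it, with the binary path and profile name assumed from this run:

	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 \
	  --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
	if [ $? -eq 106 ]; then
	  # in-place downgrade correctly refused; recreate per the suggestion above
	  echo "downgrade unsupported"
	fi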
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220725130322-44543 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker : (33.102922711s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-07-25 13:12:48.57873 -0700 PDT m=+3304.034005857
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220725130322-44543
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220725130322-44543:

-- stdout --
	[
	    {
	        "Id": "4eac9f19ddf0e78d2f64783a35379c025b5f6568963499d3052b07de3ecac438",
	        "Created": "2022-07-25T20:03:33.330912264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:07:40.274015371Z",
	            "FinishedAt": "2022-07-25T20:07:37.662409501Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/4eac9f19ddf0e78d2f64783a35379c025b5f6568963499d3052b07de3ecac438/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4eac9f19ddf0e78d2f64783a35379c025b5f6568963499d3052b07de3ecac438/hostname",
	        "HostsPath": "/var/lib/docker/containers/4eac9f19ddf0e78d2f64783a35379c025b5f6568963499d3052b07de3ecac438/hosts",
	        "LogPath": "/var/lib/docker/containers/4eac9f19ddf0e78d2f64783a35379c025b5f6568963499d3052b07de3ecac438/4eac9f19ddf0e78d2f64783a35379c025b5f6568963499d3052b07de3ecac438-json.log",
	        "Name": "/kubernetes-upgrade-20220725130322-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220725130322-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220725130322-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/820efae240ead1a66f2e9defebf9cf2e014dd0f2b2f2ee305eaeb3dc4a607853-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/820efae240ead1a66f2e9defebf9cf2e014dd0f2b2f2ee305eaeb3dc4a607853/merged",
	                "UpperDir": "/var/lib/docker/overlay2/820efae240ead1a66f2e9defebf9cf2e014dd0f2b2f2ee305eaeb3dc4a607853/diff",
	                "WorkDir": "/var/lib/docker/overlay2/820efae240ead1a66f2e9defebf9cf2e014dd0f2b2f2ee305eaeb3dc4a607853/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220725130322-44543",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220725130322-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220725130322-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220725130322-44543",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220725130322-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70d480b788296fd47f7de0b511a979d4e408c6c1087e9d6b1bb3eb8888ecf2cd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55788"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55789"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55791"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/70d480b78829",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220725130322-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4eac9f19ddf0",
	                        "kubernetes-upgrade-20220725130322-44543"
	                    ],
	                    "NetworkID": "b40b2d7bbb1fde67af31346e07d015c58f9eec418efc5ecaeef6ba62f9af8d7e",
	                    "EndpointID": "e9142bf4061c158096817dd87872afe3a2e7d5926ed01f64f9f055a1db20a613",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
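The full inspect dump above is for the post-mortem; individual fields can be pulled with the same Go-template queries the runner uses elsewhere in this log, e.g. the host port mapped to the node's SSH port (55792 in the dump):

	docker container inspect kubernetes-upgrade-20220725130322-44543 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'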
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220725130322-44543 -n kubernetes-upgrade-20220725130322-44543
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725130322-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220725130322-44543 logs -n 25: (4.127840225s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-20220725130540-44543           | pause-20220725130540-44543              | jenkins | v1.26.0 | 25 Jul 22 13:07 PDT | 25 Jul 22 13:07 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:07 PDT | 25 Jul 22 13:07 PDT |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:07 PDT | 25 Jul 22 13:12 PDT |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| delete  | -p pause-20220725130540-44543           | pause-20220725130540-44543              | jenkins | v1.26.0 | 25 Jul 22 13:08 PDT | 25 Jul 22 13:08 PDT |
	| start   | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:08 PDT |                     |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | --no-kubernetes                         |                                         |         |         |                     |                     |
	|         | --kubernetes-version=1.20               |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:08 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |         |                     |                     |
	| delete  | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |         |                     |                     |
	| ssh     | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT |                     |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |                                         |         |         |                     |                     |
	|         | service kubelet                         |                                         |         |         |                     |                     |
	| profile | list                                    | minikube                                | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	| profile | list --output=json                      | minikube                                | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	| stop    | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT |                     |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |                                         |         |         |                     |                     |
	|         | service kubelet                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | NoKubernetes-20220725130838-44543       | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:09 PDT |
	|         | NoKubernetes-20220725130838-44543       |                                         |         |         |                     |                     |
	| start   | -p auto-20220725125922-44543            | auto-20220725125922-44543               | jenkins | v1.26.0 | 25 Jul 22 13:09 PDT | 25 Jul 22 13:10 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | -p auto-20220725125922-44543            | auto-20220725125922-44543               | jenkins | v1.26.0 | 25 Jul 22 13:10 PDT | 25 Jul 22 13:10 PDT |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p auto-20220725125922-44543            | auto-20220725125922-44543               | jenkins | v1.26.0 | 25 Jul 22 13:10 PDT | 25 Jul 22 13:10 PDT |
	| start   | -p                                      | kindnet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:10 PDT | 25 Jul 22 13:11 PDT |
	|         | kindnet-20220725125922-44543            |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker           |                                         |         |         |                     |                     |
	| ssh     | -p                                      | kindnet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:11 PDT | 25 Jul 22 13:11 PDT |
	|         | kindnet-20220725125922-44543            |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p                                      | kindnet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:11 PDT | 25 Jul 22 13:11 PDT |
	|         | kindnet-20220725125922-44543            |                                         |         |         |                     |                     |
	| start   | -p cilium-20220725125923-44543          | cilium-20220725125923-44543             | jenkins | v1.26.0 | 25 Jul 22 13:11 PDT |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true           |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium          |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:12 PDT |                     |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:12 PDT | 25 Jul 22 13:12 PDT |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:12:15
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:12:15.524080   58231 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:12:15.524208   58231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:12:15.524213   58231 out.go:309] Setting ErrFile to fd 2...
	I0725 13:12:15.524217   58231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:12:15.524318   58231 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:12:15.524788   58231 out.go:303] Setting JSON to false
	I0725 13:12:15.540021   58231 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15107,"bootTime":1658764828,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:12:15.540114   58231 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:12:15.562448   58231 out.go:177] * [kubernetes-upgrade-20220725130322-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:12:15.604242   58231 notify.go:193] Checking for updates...
	I0725 13:12:15.641279   58231 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:12:15.700093   58231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:12:15.721566   58231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:12:15.743019   58231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:12:15.764320   58231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:12:15.786670   58231 config.go:178] Loaded profile config "kubernetes-upgrade-20220725130322-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:12:15.787302   58231 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:12:15.858243   58231 docker.go:137] docker version: linux-20.10.17
	I0725 13:12:15.858363   58231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:12:15.991872   58231 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:56 SystemTime:2022-07-25 20:12:15.929717623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:12:16.014053   58231 out.go:177] * Using the docker driver based on existing profile
	I0725 13:12:16.056617   58231 start.go:284] selected driver: docker
	I0725 13:12:16.056671   58231 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-
44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:12:16.056821   58231 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:12:16.059983   58231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:12:16.195923   58231 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:56 SystemTime:2022-07-25 20:12:16.132833017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:12:16.196079   58231 cni.go:95] Creating CNI manager for ""
	I0725 13:12:16.196093   58231 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:12:16.196108   58231 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:12:16.218431   58231 out.go:177] * Starting control plane node kubernetes-upgrade-20220725130322-44543 in cluster kubernetes-upgrade-20220725130322-44543
	I0725 13:12:16.240129   58231 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:12:16.262005   58231 out.go:177] * Pulling base image ...
	I0725 13:12:16.303888   58231 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:12:16.303909   58231 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:12:16.303948   58231 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:12:16.303962   58231 cache.go:57] Caching tarball of preloaded images
	I0725 13:12:16.304084   58231 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:12:16.304095   58231 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:12:16.304668   58231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/config.json ...
	I0725 13:12:16.368086   58231 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:12:16.368119   58231 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:12:16.368130   58231 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:12:16.368189   58231 start.go:370] acquiring machines lock for kubernetes-upgrade-20220725130322-44543: {Name:mk3e7763670f2855f6746ef40eb840a24b5302f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:12:16.368272   58231 start.go:374] acquired machines lock for "kubernetes-upgrade-20220725130322-44543" in 64.187µs
	I0725 13:12:16.368292   58231 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:12:16.368304   58231 fix.go:55] fixHost starting: 
	I0725 13:12:16.368548   58231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:12:16.440556   58231 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220725130322-44543: state=Running err=<nil>
	W0725 13:12:16.440583   58231 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:12:16.462851   58231 out.go:177] * Updating the running docker "kubernetes-upgrade-20220725130322-44543" container ...
	I0725 13:12:13.547425   58100 out.go:204]   - Booting up control plane ...
	I0725 13:12:16.505229   58231 machine.go:88] provisioning docker machine ...
	I0725 13:12:16.505281   58231 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220725130322-44543"
	I0725 13:12:16.505434   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:16.578820   58231 main.go:134] libmachine: Using SSH client type: native
	I0725 13:12:16.579013   58231 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:12:16.579025   58231 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220725130322-44543 && echo "kubernetes-upgrade-20220725130322-44543" | sudo tee /etc/hostname
	I0725 13:12:16.709375   58231 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220725130322-44543
	
	I0725 13:12:16.709450   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:16.781330   58231 main.go:134] libmachine: Using SSH client type: native
	I0725 13:12:16.781478   58231 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:12:16.781494   58231 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220725130322-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220725130322-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220725130322-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:12:16.904164   58231 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:12:16.904186   58231 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:12:16.904205   58231 ubuntu.go:177] setting up certificates
	I0725 13:12:16.904218   58231 provision.go:83] configureAuth start
	I0725 13:12:16.904310   58231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:16.975644   58231 provision.go:138] copyHostCerts
	I0725 13:12:16.975735   58231 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:12:16.975745   58231 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:12:16.975854   58231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:12:16.976046   58231 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:12:16.976056   58231 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:12:16.976115   58231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:12:16.976279   58231 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:12:16.976285   58231 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:12:16.976344   58231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:12:16.976461   58231 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220725130322-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220725130322-44543]
	I0725 13:12:17.244166   58231 provision.go:172] copyRemoteCerts
	I0725 13:12:17.244226   58231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:12:17.244266   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:17.337978   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:17.424138   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:12:17.441629   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0725 13:12:17.458875   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:12:17.475894   58231 provision.go:86] duration metric: configureAuth took 571.653405ms
	I0725 13:12:17.475907   58231 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:12:17.476043   58231 config.go:178] Loaded profile config "kubernetes-upgrade-20220725130322-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:12:17.476092   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:17.548703   58231 main.go:134] libmachine: Using SSH client type: native
	I0725 13:12:17.548860   58231 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:12:17.548869   58231 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:12:17.671263   58231 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:12:17.671284   58231 ubuntu.go:71] root file system type: overlay
	I0725 13:12:17.671422   58231 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:12:17.671500   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:17.743315   58231 main.go:134] libmachine: Using SSH client type: native
	I0725 13:12:17.743480   58231 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:12:17.743531   58231 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:12:17.873849   58231 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
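	The unit file just written follows the standard systemd override pattern: the empty ExecStart= directive clears the command inherited from the base configuration (as the comment block inside the unit explains), and the second ExecStart= supplies the TLS-enabled dockerd command line. A sketch, assuming a shell on the node, of confirming what systemd will actually execute:

	    # print the unit as systemd resolved it, drop-ins included
	    systemctl cat docker.service
	    # show the single effective ExecStart after the reset
	    systemctl show docker.service -p ExecStart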
	I0725 13:12:17.873953   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:17.946520   58231 main.go:134] libmachine: Using SSH client type: native
	I0725 13:12:17.946700   58231 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:12:17.946714   58231 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:12:18.072920   58231 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:12:18.072934   58231 machine.go:91] provisioned docker machine in 1.567656156s
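	The one-line SSH command above is an idempotent update: diff -u exits non-zero when docker.service.new differs from the installed docker.service, and only in that case does the || branch move the new file into place, reload systemd, re-enable and restart Docker; an unchanged file leaves the running daemon alone. The same pattern written out as a sketch:

	    # replace-and-restart only when the rendered unit actually changed
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload && sudo systemctl restart docker
	    fi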
	I0725 13:12:18.072942   58231 start.go:307] post-start starting for "kubernetes-upgrade-20220725130322-44543" (driver="docker")
	I0725 13:12:18.072947   58231 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:12:18.073005   58231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:12:18.073072   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:18.146111   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:18.232526   58231 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:12:18.236482   58231 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:12:18.236510   58231 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:12:18.236519   58231 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:12:18.236527   58231 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:12:18.236540   58231 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:12:18.236661   58231 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:12:18.236802   58231 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:12:18.236966   58231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:12:18.245289   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:12:18.263283   58231 start.go:310] post-start completed in 190.328494ms
	I0725 13:12:18.263356   58231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:12:18.263408   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:18.335545   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:18.423535   58231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:12:18.428970   58231 fix.go:57] fixHost completed within 2.060624725s
	I0725 13:12:18.428986   58231 start.go:82] releasing machines lock for "kubernetes-upgrade-20220725130322-44543", held for 2.06066512s
	I0725 13:12:18.429077   58231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:18.506072   58231 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:12:18.506074   58231 ssh_runner.go:195] Run: systemctl --version
	I0725 13:12:18.506165   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:18.506162   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:18.599802   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:18.602583   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:18.816158   58231 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:12:18.826302   58231 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:12:18.826381   58231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:12:18.835772   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:12:18.848123   58231 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:12:18.948264   58231 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:12:19.037944   58231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:12:19.142395   58231 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:12:21.985604   58231 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.843126733s)
	I0725 13:12:21.985677   58231 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:12:22.067743   58231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:12:22.162678   58231 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:12:22.186199   58231 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:12:22.186287   58231 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:12:22.194738   58231 start.go:471] Will wait 60s for crictl version
	I0725 13:12:22.194803   58231 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:12:22.281495   58231 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
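	The crictl version output above reflects the /etc/crictl.yaml written at 13:12:18, which points both the runtime and image endpoints at unix:///var/run/cri-dockerd.sock; that is why a Docker runtime (20.10.17) is reported through the CRI interface. A sketch of exercising the same endpoint explicitly:

	    # equivalent to the call above, with the endpoint spelled out
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	    # subsequent calls can rely on /etc/crictl.yaml instead
	    sudo crictl ps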
	I0725 13:12:22.281593   58231 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:12:22.329057   58231 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:12:25.077574   58100 out.go:204]   - Configuring RBAC rules ...
	I0725 13:12:25.465302   58100 cni.go:95] Creating CNI manager for "cilium"
	I0725 13:12:25.523446   58100 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0725 13:12:22.429809   58231 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:12:22.429976   58231 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220725130322-44543 dig +short host.docker.internal
	I0725 13:12:22.598232   58231 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:12:22.598476   58231 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
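	The grep probe checks whether the host.minikube.internal alias already resolves to the host IP that the dig against host.docker.internal returned, so the entry only needs to be written when absent. A hedged sketch of that add-if-missing idiom, with illustrative values:

	    # append the alias only when no matching entry exists
	    grep -q 'host.minikube.internal' /etc/hosts \
	      || echo '192.168.65.2 host.minikube.internal' | sudo tee -a /etc/hosts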
	I0725 13:12:22.605561   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:22.687883   58231 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:12:22.687964   58231 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:12:22.759008   58231 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0725 13:12:22.759038   58231 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:12:22.759139   58231 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:12:22.800690   58231 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0725 13:12:22.800718   58231 cache_images.go:84] Images are preloaded, skipping loading
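	Both docker images listings contain the full tagged set for v1.24.2; the <none>:<none> rows are untagged layers left over from the preload, not missing images, which is why cache_images.go declares the preload complete. A sketch of the filtered view minikube is effectively comparing against:

	    # tagged images only, with dangling <none>:<none> rows dropped
	    docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>'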
	I0725 13:12:22.800805   58231 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:12:22.981658   58231 cni.go:95] Creating CNI manager for ""
	I0725 13:12:22.981673   58231 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:12:22.981687   58231 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:12:22.981707   58231 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220725130322-44543 NodeName:kubernetes-upgrade-20220725130322-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:12:22.981840   58231 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-20220725130322-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
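	The kubeadm config above is a single multi-document YAML carrying four kinds: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration that kubeadm forwards to those components. Once it lands on the node (as kubeadm.yaml.new, per the scp below), a quick structural sanity check might look like:

	    # enumerate the document kinds in the staged multi-doc config
	    grep -E '^kind:' /var/tmp/minikube/kubeadm.yaml.new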
	I0725 13:12:22.981928   58231 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-20220725130322-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:12:22.981988   58231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:12:22.992028   58231 binaries.go:44] Found k8s binaries, skipping transfer
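	binaries.go skips the transfer because the versioned binaries are already cached under /var/lib/minikube/binaries/v1.24.2. A sketch, assuming a shell on the node, of confirming they match the requested Kubernetes version:

	    # both should report v1.24.2
	    sudo /var/lib/minikube/binaries/v1.24.2/kubelet --version
	    sudo /var/lib/minikube/binaries/v1.24.2/kubeadm version -o short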
	I0725 13:12:22.992085   58231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:12:23.005044   58231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (501 bytes)
	I0725 13:12:23.075091   58231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:12:23.103002   58231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0725 13:12:23.146171   58231 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:12:23.159639   58231 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543 for IP: 192.168.76.2
	I0725 13:12:23.159769   58231 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:12:23.159825   58231 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:12:23.159945   58231 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key
	I0725 13:12:23.160013   58231 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key.31bdca25
	I0725 13:12:23.160090   58231 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key
	I0725 13:12:23.160414   58231 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:12:23.160480   58231 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:12:23.160513   58231 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:12:23.160548   58231 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:12:23.160625   58231 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:12:23.160666   58231 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:12:23.160745   58231 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:12:23.161635   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:12:23.198821   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:12:23.261519   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:12:23.309770   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 13:12:23.359236   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:12:23.382285   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:12:23.406734   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:12:23.439540   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:12:23.469001   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:12:23.501596   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:12:23.535861   58231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:12:23.558664   58231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:12:23.581536   58231 ssh_runner.go:195] Run: openssl version
	I0725 13:12:23.588354   58231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:12:23.598355   58231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:12:23.603042   58231 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:12:23.603101   58231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:12:23.610190   58231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:12:23.618961   58231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:12:23.629334   58231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:12:23.633885   58231 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:12:23.633961   58231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:12:23.639765   58231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:12:23.648214   58231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:12:23.657204   58231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:12:23.661938   58231 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:12:23.661991   58231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:12:23.668956   58231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
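	Each openssl x509 -hash -noout call above computes the OpenSSL subject-name hash, and the ln -fs that follows creates the <hash>.0 symlink (51391683.0, 3ec20f2e.0 and b5213941.0 here) that OpenSSL's CA lookup in /etc/ssl/certs relies on. Recomputing one by hand:

	    # the printed hash names the symlink created for minikubeCA.pem
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${h}.0"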
	I0725 13:12:23.677476   58231 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:12:23.677575   58231 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:12:23.711452   58231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:12:23.719851   58231 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:12:23.719865   58231 kubeadm.go:626] restartCluster start
	I0725 13:12:23.719916   58231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:12:23.726955   58231 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:12:23.727017   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:23.806435   58231 kubeconfig.go:92] found "kubernetes-upgrade-20220725130322-44543" server: "https://127.0.0.1:55791"
	I0725 13:12:23.806896   58231 kapi.go:59] client config for kubernetes-upgrade-20220725130322-44543: &rest.Config{Host:"https://127.0.0.1:55791", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 13:12:23.807446   58231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:12:23.816467   58231 api_server.go:165] Checking apiserver status ...
	I0725 13:12:23.816540   58231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:12:23.828968   58231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10758/cgroup
	W0725 13:12:23.840883   58231 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10758/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:12:23.840902   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
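	With the freezer-cgroup lookup failing above (exit status 1, which the code tolerates as a warning), minikube falls back to probing the apiserver health endpoint over the forwarded port. A manual equivalent of the same probe:

	    # -k skips CA verification; this mirrors a quick liveness check, not a trust check
	    curl -k https://127.0.0.1:55791/healthz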
	I0725 13:12:25.544772   58100 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0725 13:12:25.573779   58100 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0725 13:12:25.573791   58100 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0725 13:12:25.573837   58100 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon their
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then it
	  # should ideally be removed.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path:  /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0725 13:12:25.573876   58100 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
	I0725 13:12:25.573893   58100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0725 13:12:25.591244   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
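[Editor's note] The two log lines above show how the rendered Cilium manifest is delivered: the YAML is copied to /var/tmp/minikube/cni.yaml on the node, then the version-pinned kubectl is run against the cluster kubeconfig. A minimal Go sketch of that apply step, run on the node itself (it skips the SSH hop that ssh_runner performs; the input path in main is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifest mirrors the pattern in the log above: write the rendered
    // manifest to a well-known path, then invoke the pinned kubectl binary
    // with an explicit kubeconfig.
    func applyManifest(manifest []byte) error {
    	const target = "/var/tmp/minikube/cni.yaml"
    	if err := os.WriteFile(target, manifest, 0o644); err != nil {
    		return fmt.Errorf("writing manifest: %w", err)
    	}
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.24.2/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", target,
    	)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	manifest, err := os.ReadFile("cni.yaml") // illustrative input path
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := applyManifest(manifest); err != nil {
    		fmt.Println(err)
    	}
    }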
	I0725 13:12:26.179313   58100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:12:26.179392   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:26.179393   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=cilium-20220725125923-44543 minikube.k8s.io/updated_at=2022_07_25T13_12_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:26.188515   58100 ops.go:34] apiserver oom_adj: -16
	I0725 13:12:26.264397   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:26.824167   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:28.841391   58231 api_server.go:256] stopped: https://127.0.0.1:55791/healthz: Get "https://127.0.0.1:55791/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 13:12:28.841429   58231 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0725 13:12:29.106660   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:27.323785   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:27.824440   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:28.325906   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:28.825386   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:29.324641   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:29.823979   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:30.324732   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:30.823966   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:31.324045   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:31.825418   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:34.107603   58231 api_server.go:256] stopped: https://127.0.0.1:55791/healthz: Get "https://127.0.0.1:55791/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 13:12:34.107638   58231 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0725 13:12:34.489042   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:32.325973   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:32.825455   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:33.324257   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:33.824249   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:34.324546   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:34.824859   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:35.324029   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:35.824073   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:36.323984   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:36.823990   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:37.324372   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:37.823976   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:38.326113   58100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:12:38.444184   58100 kubeadm.go:1045] duration metric: took 12.264611297s to wait for elevateKubeSystemPrivileges.
	I0725 13:12:38.444202   58100 kubeadm.go:397] StartCluster complete in 28.53954405s
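[Editor's note] The burst of identical "kubectl get sa default" runs above is a poll loop: the command is retried roughly every 500ms until the default service account exists, which is what the "took 12.264611297s to wait for elevateKubeSystemPrivileges" metric measures. A stdlib-only sketch of that wait (the probe closure using plain kubectl is an assumption for illustration, not minikube's internal API):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls until the probe succeeds, mirroring the ~500ms
    // retry cadence visible in the log above.
    func waitForDefaultSA(timeout time.Duration, exists func() bool) error {
    	deadline := time.Now().Add(timeout)
    	for !exists() {
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for default service account")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return nil
    }

    func main() {
    	probe := func() bool {
    		// Succeeds once the apiserver can serve the "default" service account.
    		return exec.Command("kubectl", "get", "sa", "default").Run() == nil
    	}
    	if err := waitForDefaultSA(2*time.Minute, probe); err != nil {
    		fmt.Println(err)
    	}
    }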
	I0725 13:12:38.444218   58100 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:12:38.444304   58100 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:12:38.445239   58100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:12:38.968806   58100 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220725125923-44543" rescaled to 1
	I0725 13:12:38.968845   58100 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:12:38.991971   58100 out.go:177] * Verifying Kubernetes components...
	I0725 13:12:38.968854   58100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:12:38.968885   58100 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0725 13:12:38.968991   58100 config.go:178] Loaded profile config "cilium-20220725125923-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:12:39.065457   58100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:12:39.065477   58100 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220725125923-44543"
	I0725 13:12:39.065496   58100 addons.go:65] Setting default-storageclass=true in profile "cilium-20220725125923-44543"
	I0725 13:12:39.065503   58100 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220725125923-44543"
	W0725 13:12:39.065512   58100 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:12:39.065523   58100 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220725125923-44543"
	I0725 13:12:39.065569   58100 host.go:66] Checking if "cilium-20220725125923-44543" exists ...
	I0725 13:12:39.065852   58100 cli_runner.go:164] Run: docker container inspect cilium-20220725125923-44543 --format={{.State.Status}}
	I0725 13:12:39.066398   58100 cli_runner.go:164] Run: docker container inspect cilium-20220725125923-44543 --format={{.State.Status}}
	I0725 13:12:39.075750   58100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:12:39.092177   58100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220725125923-44543
	I0725 13:12:39.188198   58100 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:12:39.225836   58100 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:12:39.225852   58100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:12:39.225938   58100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220725125923-44543
	I0725 13:12:39.231733   58100 addons.go:153] Setting addon default-storageclass=true in "cilium-20220725125923-44543"
	W0725 13:12:39.231760   58100 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:12:39.231789   58100 host.go:66] Checking if "cilium-20220725125923-44543" exists ...
	I0725 13:12:39.232328   58100 cli_runner.go:164] Run: docker container inspect cilium-20220725125923-44543 --format={{.State.Status}}
	I0725 13:12:39.247916   58100 node_ready.go:35] waiting up to 5m0s for node "cilium-20220725125923-44543" to be "Ready" ...
	I0725 13:12:39.256431   58100 node_ready.go:49] node "cilium-20220725125923-44543" has status "Ready":"True"
	I0725 13:12:39.256445   58100 node_ready.go:38] duration metric: took 8.495925ms waiting for node "cilium-20220725125923-44543" to be "Ready" ...
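[Editor's note] node_ready.go above waits up to 5m for the node's Ready condition to become True. A sketch of the same check using client-go (the kubeconfig path is illustrative, and pulling in client-go as a dependency is an assumption of the sketch, not something this report pins):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has condition Ready=True,
    // the same condition node_ready.go polls for in the log above.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := nodeIsReady(context.Background(), cs, "cilium-20220725125923-44543")
    	fmt.Println(ready, err)
    }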
	I0725 13:12:39.256459   58100 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:12:39.272575   58100 pod_ready.go:78] waiting up to 5m0s for pod "cilium-c2t4r" in "kube-system" namespace to be "Ready" ...
	I0725 13:12:39.320331   58100 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
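[Editor's note] The "host record injected into CoreDNS" line is the result of the sed pipeline at 13:12:39.075750 above: the coredns ConfigMap is fetched, a hosts{} block mapping host.minikube.internal to the host IP is inserted just above the "forward . /etc/resolv.conf" line, and the edited Corefile is pushed back with kubectl replace. A small Go sketch of that string edit (the sample Corefile in main is illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block above the "forward" directive,
    // the same edit the sed pipeline above performs on the live ConfigMap.
    func injectHostRecord(corefile, hostIP string) string {
    	block := "        hosts {\n" +
    		fmt.Sprintf("           %s host.minikube.internal\n", hostIP) +
    		"           fallthrough\n" +
    		"        }\n"
    	idx := strings.Index(corefile, "        forward . /etc/resolv.conf")
    	if idx < 0 {
    		return corefile // pattern absent: leave the Corefile untouched
    	}
    	return corefile[:idx] + block + corefile[idx:]
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.65.2"))
    }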
	I0725 13:12:39.323375   58100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56951 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/cilium-20220725125923-44543/id_rsa Username:docker}
	I0725 13:12:39.334469   58100 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:12:39.334480   58100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:12:39.334532   58100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220725125923-44543
	I0725 13:12:39.418343   58100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56951 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/cilium-20220725125923-44543/id_rsa Username:docker}
	I0725 13:12:39.440066   58100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:12:39.557043   58100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:12:39.933377   58100 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 13:12:35.922213   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:12:35.922240   58231 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:55791/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:12:36.345099   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:36.350599   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:12:36.350622   58231 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:55791/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
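[Editor's note] The 403/500 progression above is the apiserver booting: anonymous requests to /healthz are Forbidden until the rbac/bootstrap-roles post-start hook lands, after which the verbose healthz output lists per-check [+]/[-] status until every hook passes and the endpoint returns 200 "ok" (visible further down at 13:12:37.424595). A stdlib sketch of such a poller, skipping TLS verification because, as in api_server.go above, the probe hits 127.0.0.1 with a self-signed certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz GETs /healthz until it returns 200 or attempts run out,
    // printing the per-check body on failures as the log above does.
    func pollHealthz(url string, attempts int) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is self-signed for 127.0.0.1 in this setup.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := pollHealthz("https://127.0.0.1:55791/healthz", 60); err != nil {
    		fmt.Println(err)
    	}
    }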
	I0725 13:12:36.823798   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:36.830964   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:12:36.830990   58231 retry.go:31] will retry after 587.352751ms: https://127.0.0.1:55791/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:12:37.419105   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:37.424595   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 200:
	ok
	I0725 13:12:37.435790   58231 system_pods.go:86] 5 kube-system pods found
	I0725 13:12:37.435809   58231 system_pods.go:89] "etcd-kubernetes-upgrade-20220725130322-44543" [9e6a8d8f-50f0-4d31-ad0b-cab52896843c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 13:12:37.435818   58231 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220725130322-44543" [2aef2e2f-e3ab-47bb-a07a-644345ca6778] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 13:12:37.435825   58231 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220725130322-44543" [88275409-1923-4399-b1b8-2c2b9580b709] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:12:37.435829   58231 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220725130322-44543" [68e96e8d-7d2c-4cef-bad3-574605bda709] Running
	I0725 13:12:37.435834   58231 system_pods.go:89] "storage-provisioner" [eb15293c-28c4-413e-b807-dc26ff4255a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0725 13:12:37.435842   58231 kubeadm.go:610] needs reconfigure: missing components: kube-dns, kube-proxy
	I0725 13:12:37.435848   58231 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:12:37.435903   58231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:12:37.467243   58231 docker.go:443] Stopping containers: [9b2f829fbc02 2418947aa2a4 06a104072611 9f88f2df0ebf 56260039e575 42ef5ba5713f 332902c42c60 1ef874a48b6a 1a3964b6360b ec51ae1fea02 cd31de1aee90 30458544112c c7f2cb168538 3c9ec27fabe0 2c1c44ce2f54 f23243c28469 ff14793ad4ef 3e6033fd6e0d]
	I0725 13:12:37.467327   58231 ssh_runner.go:195] Run: docker stop 9b2f829fbc02 2418947aa2a4 06a104072611 9f88f2df0ebf 56260039e575 42ef5ba5713f 332902c42c60 1ef874a48b6a 1a3964b6360b ec51ae1fea02 cd31de1aee90 30458544112c c7f2cb168538 3c9ec27fabe0 2c1c44ce2f54 f23243c28469 ff14793ad4ef 3e6033fd6e0d
	I0725 13:12:38.594702   58231 ssh_runner.go:235] Completed: docker stop 9b2f829fbc02 2418947aa2a4 06a104072611 9f88f2df0ebf 56260039e575 42ef5ba5713f 332902c42c60 1ef874a48b6a 1a3964b6360b ec51ae1fea02 cd31de1aee90 30458544112c c7f2cb168538 3c9ec27fabe0 2c1c44ce2f54 f23243c28469 ff14793ad4ef 3e6033fd6e0d: (1.127330612s)
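[Editor's note] The stop sequence above is two docker commands: list all container IDs whose names match the kubelet's k8s_<container>_<pod>_<namespace>_ naming pattern restricted to kube-system, then stop them in one batch. A minimal Go sketch of that pair:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopKubeSystemContainers mirrors the two commands in the log above:
    // collect IDs via the name-regex filter, then issue a single docker stop.
    func stopKubeSystemContainers() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil // nothing to stop
    	}
    	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
    	if err := stopKubeSystemContainers(); err != nil {
    		fmt.Println(err)
    	}
    }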
	I0725 13:12:38.594777   58231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:12:38.673801   58231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:12:38.682794   58231 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2095 Jul 25 20:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 20:12 /etc/kubernetes/scheduler.conf
	
	I0725 13:12:38.682853   58231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:12:38.692844   58231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:12:38.701389   58231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:12:38.709363   58231 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:12:38.709429   58231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:12:38.719045   58231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:12:38.727062   58231 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:12:38.727115   58231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
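[Editor's note] The grep/rm pairs above decide whether each static kubeconfig still points at the expected control-plane endpoint; when the grep exits non-zero, the file is removed so kubeadm regenerates it in the next phase. A stdlib Go sketch of that staleness check:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale deletes a kubeconfig that no longer references the expected
    // control-plane endpoint, mirroring the grep/rm pairs in the log above.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // still current, keep it
    	}
    	fmt.Printf("%q not found in %s - removing\n", endpoint, path)
    	return os.Remove(path)
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, "https://control-plane.minikube.internal:8443"); err != nil {
    			fmt.Println(err)
    		}
    	}
    }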
	I0725 13:12:38.734773   58231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:12:38.742174   58231 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:12:38.742186   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:12:38.784655   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:12:39.726335   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:12:39.956113   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:12:40.012259   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
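[Editor's note] The five commands above rebuild the control plane piecewise with "kubeadm init phase" rather than a full "kubeadm init": certs, kubeconfig, kubelet-start, control-plane, then local etcd, all driven by the same /var/tmp/minikube/kubeadm.yaml. A sketch replaying that sequence as a plain exec loop (paths match the log; the loop itself is illustrative, not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runInitPhases replays the phase sequence from the log above, stopping
    // at the first phase that fails.
    func runInitPhases() error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.2/kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return fmt.Errorf("kubeadm %v: %w", args, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runInitPhases(); err != nil {
    		fmt.Println(err)
    	}
    }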
	I0725 13:12:40.080173   58231 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:12:40.080235   58231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:12:39.974759   58100 addons.go:414] enableAddons completed in 1.005833913s
	I0725 13:12:41.293410   58100 pod_ready.go:102] pod "cilium-c2t4r" in "kube-system" namespace has status "Ready":"False"
	I0725 13:12:40.593986   58231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:12:41.094272   58231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:12:41.108040   58231 api_server.go:71] duration metric: took 1.027835347s to wait for apiserver process to appear ...
	I0725 13:12:41.108067   58231 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:12:41.108084   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:45.500275   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:12:45.500295   58231 api_server.go:102] status: https://127.0.0.1:55791/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:12:46.000486   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:46.007825   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:12:46.007847   58231 api_server.go:102] status: https://127.0.0.1:55791/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:12:46.500524   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:46.507481   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:12:46.507496   58231 api_server.go:102] status: https://127.0.0.1:55791/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:12:47.000461   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:47.007514   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 200:
	ok
	I0725 13:12:47.015278   58231 api_server.go:140] control plane version: v1.24.2
	I0725 13:12:47.015293   58231 api_server.go:130] duration metric: took 5.90710321s to wait for apiserver health ...
	I0725 13:12:47.015299   58231 cni.go:95] Creating CNI manager for ""
	I0725 13:12:47.015305   58231 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:12:47.015311   58231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:12:47.020211   58231 system_pods.go:59] 5 kube-system pods found
	I0725 13:12:47.020224   58231 system_pods.go:61] "etcd-kubernetes-upgrade-20220725130322-44543" [9e6a8d8f-50f0-4d31-ad0b-cab52896843c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 13:12:47.020232   58231 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220725130322-44543" [2aef2e2f-e3ab-47bb-a07a-644345ca6778] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 13:12:47.020239   58231 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220725130322-44543" [88275409-1923-4399-b1b8-2c2b9580b709] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:12:47.020244   58231 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220725130322-44543" [68e96e8d-7d2c-4cef-bad3-574605bda709] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:12:47.020248   58231 system_pods.go:61] "storage-provisioner" [eb15293c-28c4-413e-b807-dc26ff4255a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0725 13:12:47.020252   58231 system_pods.go:74] duration metric: took 4.937279ms to wait for pod list to return data ...
	I0725 13:12:47.020259   58231 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:12:47.022957   58231 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:12:47.022974   58231 node_conditions.go:123] node cpu capacity is 6
	I0725 13:12:47.022996   58231 node_conditions.go:105] duration metric: took 2.73101ms to run NodePressure ...
	I0725 13:12:47.023013   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:12:47.167297   58231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:12:47.176210   58231 ops.go:34] apiserver oom_adj: -16
	I0725 13:12:47.176223   58231 kubeadm.go:630] restartCluster took 23.455887384s
	I0725 13:12:47.176232   58231 kubeadm.go:397] StartCluster complete in 23.498296955s
	I0725 13:12:47.176249   58231 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:12:47.176337   58231 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:12:47.177041   58231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:12:47.177917   58231 kapi.go:59] client config for kubernetes-upgrade-20220725130322-44543: &rest.Config{Host:"https://127.0.0.1:55791", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 13:12:47.180602   58231 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220725130322-44543" rescaled to 1
	I0725 13:12:47.180643   58231 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:12:47.180666   58231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:12:47.180684   58231 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0725 13:12:47.180857   58231 config.go:178] Loaded profile config "kubernetes-upgrade-20220725130322-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:12:47.227863   58231 out.go:177] * Verifying Kubernetes components...
	I0725 13:12:43.294077   58100 pod_ready.go:102] pod "cilium-c2t4r" in "kube-system" namespace has status "Ready":"False"
	I0725 13:12:45.789442   58100 pod_ready.go:102] pod "cilium-c2t4r" in "kube-system" namespace has status "Ready":"False"
	I0725 13:12:47.227976   58231 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220725130322-44543"
	I0725 13:12:47.227978   58231 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220725130322-44543"
	I0725 13:12:47.301100   58231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220725130322-44543"
	I0725 13:12:47.301112   58231 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220725130322-44543"
	W0725 13:12:47.301126   58231 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:12:47.301148   58231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:12:47.301167   58231 host.go:66] Checking if "kubernetes-upgrade-20220725130322-44543" exists ...
	I0725 13:12:47.301484   58231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:12:47.301618   58231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:12:47.309408   58231 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 13:12:47.319589   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:47.431124   58231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:12:47.452940   58231 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:12:47.452960   58231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:12:47.453066   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:47.456578   58231 kapi.go:59] client config for kubernetes-upgrade-20220725130322-44543: &rest.Config{Host:"https://127.0.0.1:55791", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 13:12:47.459687   58231 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:12:47.459757   58231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:12:47.464462   58231 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220725130322-44543"
	W0725 13:12:47.464476   58231 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:12:47.464497   58231 host.go:66] Checking if "kubernetes-upgrade-20220725130322-44543" exists ...
	I0725 13:12:47.464923   58231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:12:47.473037   58231 api_server.go:71] duration metric: took 292.347864ms to wait for apiserver process to appear ...
	I0725 13:12:47.473063   58231 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:12:47.473074   58231 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:12:47.481359   58231 api_server.go:266] https://127.0.0.1:55791/healthz returned 200:
	ok
	I0725 13:12:47.483158   58231 api_server.go:140] control plane version: v1.24.2
	I0725 13:12:47.483172   58231 api_server.go:130] duration metric: took 10.10418ms to wait for apiserver health ...
	I0725 13:12:47.483179   58231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:12:47.489092   58231 system_pods.go:59] 5 kube-system pods found
	I0725 13:12:47.489118   58231 system_pods.go:61] "etcd-kubernetes-upgrade-20220725130322-44543" [9e6a8d8f-50f0-4d31-ad0b-cab52896843c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 13:12:47.489133   58231 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220725130322-44543" [2aef2e2f-e3ab-47bb-a07a-644345ca6778] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 13:12:47.489143   58231 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220725130322-44543" [88275409-1923-4399-b1b8-2c2b9580b709] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:12:47.489149   58231 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220725130322-44543" [68e96e8d-7d2c-4cef-bad3-574605bda709] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:12:47.489158   58231 system_pods.go:61] "storage-provisioner" [eb15293c-28c4-413e-b807-dc26ff4255a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0725 13:12:47.489162   58231 system_pods.go:74] duration metric: took 5.98006ms to wait for pod list to return data ...
	I0725 13:12:47.489169   58231 kubeadm.go:572] duration metric: took 308.505397ms to wait for : map[apiserver:true system_pods:true] ...
	I0725 13:12:47.489177   58231 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:12:47.493140   58231 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:12:47.493159   58231 node_conditions.go:123] node cpu capacity is 6
	I0725 13:12:47.493168   58231 node_conditions.go:105] duration metric: took 3.987906ms to run NodePressure ...
	I0725 13:12:47.493184   58231 start.go:216] waiting for startup goroutines ...
	I0725 13:12:47.546808   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:47.553984   58231 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:12:47.553999   58231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:12:47.554080   58231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:12:47.638895   58231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:12:47.657508   58231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:12:47.766778   58231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:12:48.355089   58231 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 13:12:48.392908   58231 addons.go:414] enableAddons completed in 1.212177536s
	I0725 13:12:48.430192   58231 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:12:48.467835   58231 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220725130322-44543" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:07:40 UTC, end at Mon 2022-07-25 20:12:50 UTC. --
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.401447432Z" level=info msg="Loading containers: start."
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.508925474Z" level=info msg="ignoring event" container=cd31de1aee90be676ad94759bd4a5d7ba849db968995392c3bce9b952cb645f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.509443817Z" level=info msg="ignoring event" container=1a3964b6360b5c62565605559f49ecb094fb0b99e323315fec9640091978055c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.511058600Z" level=info msg="ignoring event" container=ec51ae1fea02d4c263645331d2c637722afc7b7938b06b6debe00c42cea16d04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.511984870Z" level=info msg="ignoring event" container=30458544112c65a5316067fb84981a4768f5345692a9c21edd35ac12ee3b3ec4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.719417478Z" level=info msg="Removing stale sandbox 344d1d63139f6ee93ae262a5b40e62f1919cf19f58504b817ec0155759b4a82c (1ef874a48b6a55f904444088a1b60c3988280c62f87e71a3b7888981b61b0522)"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.720883286Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 39aefd24ff1c2c0af0dfb8ee747592d6d562158a8acf92f4d22cc1c084860a15 511967605c770d4db2a8e28448e0452a78645f6a9a12f6179f5a6f9bb2d95248], retrying...."
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.806031687Z" level=info msg="Removing stale sandbox 50f7a0b971c5285fd3816ca2bb40dbaf2900f578ed8a00e0da6ed2153a1526bf (cd31de1aee90be676ad94759bd4a5d7ba849db968995392c3bce9b952cb645f2)"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.807408771Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 39aefd24ff1c2c0af0dfb8ee747592d6d562158a8acf92f4d22cc1c084860a15 685b2d56c5da3b9cdc5f61656c0e8314ca885f358daf469d1b43eefee73e4959], retrying...."
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.909735911Z" level=info msg="Removing stale sandbox c4180157443cfb99cacfefa394575c09b741ad3f0767a833b48fae7c534c7b9e (30458544112c65a5316067fb84981a4768f5345692a9c21edd35ac12ee3b3ec4)"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.911105975Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 39aefd24ff1c2c0af0dfb8ee747592d6d562158a8acf92f4d22cc1c084860a15 c9198a7278eebe0af0ccef3d40e1c209bf05bc3a4dc7394f8afe6e1a451ec6af], retrying...."
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.935418989Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.970455017Z" level=info msg="Loading containers: done."
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.981354589Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 20:12:21 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:21.981498337Z" level=info msg="Daemon has completed initialization"
	Jul 25 20:12:22 kubernetes-upgrade-20220725130322-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 20:12:22 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:22.008937014Z" level=info msg="API listen on [::]:2376"
	Jul 25 20:12:22 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:22.022335792Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 20:12:37 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:37.568542824Z" level=info msg="ignoring event" container=42ef5ba5713fb897e43da97396923bcb05acff10340e3301250b5a876c4aa7be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:37 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:37.573445268Z" level=info msg="ignoring event" container=9f88f2df0ebfe7511b6f9eff27feecf114a1aa98be3954444511e2839a0b42ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:37 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:37.574703867Z" level=info msg="ignoring event" container=2418947aa2a470dcdddf4a5e5a69e2c91e98a9cdaeb118739e42b01080de3ce0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:37 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:37.579344602Z" level=info msg="ignoring event" container=56260039e5753c499b72d14432d3c55b0c4f8a1ea5cbee239c252d832374830c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:37 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:37.583591169Z" level=info msg="ignoring event" container=332902c42c60de800c8a9fc4a5a5bea516a5d063f8bac23e5994b7307bd09755 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:37 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:37.587879050Z" level=info msg="ignoring event" container=9b2f829fbc02369a81d8158230db3b5c3367012bac88d95078b61dd49bd75bb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:12:38 kubernetes-upgrade-20220725130322-44543 dockerd[10153]: time="2022-07-25T20:12:38.534209542Z" level=info msg="ignoring event" container=06a1040726118e46244c5853a54b4e644101595c0680aa9bdb3711289bbaadee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	24c6d5cc994d7       aebe758cef4cd       9 seconds ago       Running             etcd                      3                   b54386d32a360
	4054bbfea0c81       5d725196c1f47       10 seconds ago      Running             kube-scheduler            2                   138de81a094ef
	5b872886ce68e       34cdf99b1bb3b       10 seconds ago      Running             kube-controller-manager   2                   0963082d9a40e
	9a71f3e82f45f       d3377ffb7177c       10 seconds ago      Running             kube-apiserver            2                   772c18f68eaa3
	9b2f829fbc023       aebe758cef4cd       18 seconds ago      Exited              etcd                      2                   42ef5ba5713fb
	2418947aa2a47       5d725196c1f47       28 seconds ago      Exited              kube-scheduler            1                   9f88f2df0ebfe
	06a1040726118       d3377ffb7177c       28 seconds ago      Exited              kube-apiserver            1                   332902c42c60d
	ec51ae1fea02d       34cdf99b1bb3b       30 seconds ago      Exited              kube-controller-manager   1                   cd31de1aee90b
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220725130322-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220725130322-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=kubernetes-upgrade-20220725130322-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_12_13_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:12:09 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220725130322-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:12:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:12:45 +0000   Mon, 25 Jul 2022 20:12:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:12:45 +0000   Mon, 25 Jul 2022 20:12:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:12:45 +0000   Mon, 25 Jul 2022 20:12:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:12:45 +0000   Mon, 25 Jul 2022 20:12:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-20220725130322-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                f436c8cc-fffb-402c-83a4-76d1d178617c
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220725130322-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220725130322-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220725130322-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220725130322-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 39s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  38s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  38s                kubelet  Node kubernetes-upgrade-20220725130322-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet  Node kubernetes-upgrade-20220725130322-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet  Node kubernetes-upgrade-20220725130322-44543 status is now: NodeHasSufficientPID
	  Normal  Starting                 11s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x9 over 11s)  kubelet  Node kubernetes-upgrade-20220725130322-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x7 over 11s)  kubelet  Node kubernetes-upgrade-20220725130322-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)  kubelet  Node kubernetes-upgrade-20220725130322-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [24c6d5cc994d] <==
	* {"level":"info","ts":"2022-07-25T20:12:41.870Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-25T20:12:41.872Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T20:12:41.872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:12:41.872Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:12:41.872Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:12:41.872Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:12:41.873Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:12:41.873Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:12:41.873Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:12:41.873Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:12:41.873Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-07-25T20:12:43.458Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220725130322-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:12:43.459Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:12:43.459Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:12:43.459Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:12:43.459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:12:43.460Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:12:43.460Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [9b2f829fbc02] <==
	* {"level":"info","ts":"2022-07-25T20:12:32.481Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:12:32.482Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:12:32.482Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:34.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:12:34.180Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220725130322-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:12:34.180Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:12:34.180Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:12:34.180Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:12:34.180Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:12:34.182Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:12:34.182Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:12:37.521Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-25T20:12:37.521Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-20220725130322-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/07/25 20:12:37 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/25 20:12:37 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-25T20:12:37.528Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-07-25T20:12:37.534Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:12:37.535Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:12:37.535Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-20220725130322-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  20:12:51 up 54 min,  0 users,  load average: 1.92, 1.30, 1.02
	Linux kubernetes-upgrade-20220725130322-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [06a104072611] <==
	* W0725 20:12:37.525347       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525372       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525431       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525446       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525456       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525469       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525477       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525533       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525560       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525432       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525629       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525651       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525671       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525688       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525708       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525762       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525780       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525785       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525845       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525855       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525878       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.525932       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:12:37.526294       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0725 20:12:37.557456       1 object_count_tracker.go:84] "StorageObjectCountTracker pruner is exiting"
	I0725 20:12:37.557462       1 controller.go:198] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-apiserver [9a71f3e82f45] <==
	* I0725 20:12:45.510317       1 establishing_controller.go:76] Starting EstablishingController
	I0725 20:12:45.510325       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 20:12:45.510333       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0725 20:12:45.510341       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0725 20:12:45.518932       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0725 20:12:45.519058       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0725 20:12:45.534132       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 20:12:45.534834       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0725 20:12:45.586745       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0725 20:12:45.592732       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0725 20:12:45.597216       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 20:12:45.597763       1 cache.go:39] Caches are synced for autoregister controller
	I0725 20:12:45.599517       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 20:12:45.599573       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 20:12:45.599564       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 20:12:45.619544       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0725 20:12:45.636251       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:12:45.670543       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:12:46.247130       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 20:12:46.501223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 20:12:47.122799       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:12:47.130034       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:12:47.155946       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:12:47.167799       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 20:12:47.172452       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [5b872886ce68] <==
	* I0725 20:12:50.391861       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
	I0725 20:12:50.391899       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0725 20:12:50.391911       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0725 20:12:50.391919       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	W0725 20:12:50.392037       1 shared_informer.go:533] resyncPeriod 17h34m18.210150482s is smaller than resyncCheckPeriod 18h22m1.843130409s and the informer has already started. Changing it to 18h22m1.843130409s
	I0725 20:12:50.392630       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
	I0725 20:12:50.392890       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0725 20:12:50.392991       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
	I0725 20:12:50.393139       1 controllermanager.go:593] Started "resourcequota"
	I0725 20:12:50.393391       1 resource_quota_controller.go:273] Starting resource quota controller
	I0725 20:12:50.393435       1 shared_informer.go:255] Waiting for caches to sync for resource quota
	I0725 20:12:50.393455       1 resource_quota_monitor.go:308] QuotaMonitor running
	I0725 20:12:50.545422       1 controllermanager.go:593] Started "replicaset"
	I0725 20:12:50.545447       1 replica_set.go:205] Starting replicaset controller
	I0725 20:12:50.545458       1 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
	I0725 20:12:50.762343       1 controllermanager.go:593] Started "root-ca-cert-publisher"
	I0725 20:12:50.762561       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0725 20:12:50.762587       1 shared_informer.go:255] Waiting for caches to sync for crt configmap
	I0725 20:12:50.766671       1 controllermanager.go:593] Started "endpointslice"
	I0725 20:12:50.766702       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0725 20:12:50.766711       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
	I0725 20:12:50.890001       1 controllermanager.go:593] Started "endpointslicemirroring"
	I0725 20:12:50.890032       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0725 20:12:50.890041       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
	I0725 20:12:50.921584       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [ec51ae1fea02] <==
	* I0725 20:12:20.826415       1 serving.go:348] Generated self-signed cert in-memory
	I0725 20:12:21.391904       1 controllermanager.go:180] Version: v1.24.2
	I0725 20:12:21.391945       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:12:21.392803       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0725 20:12:21.393012       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 20:12:21.393138       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 20:12:21.392784       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	* 
	* ==> kube-scheduler [2418947aa2a4] <==
	* W0725 20:12:35.976180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:12:35.976490       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:12:35.976758       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 20:12:35.976963       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 20:12:35.976418       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:12:35.977108       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:12:35.977136       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:12:35.977148       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:12:35.977794       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:12:35.977831       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:12:35.977829       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 20:12:35.977846       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:12:35.977864       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:12:35.977866       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 20:12:35.977875       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:12:35.977888       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 20:12:35.978605       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 20:12:35.978605       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 20:12:35.977916       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 20:12:35.978617       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 20:12:35.978564       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:12:35.978624       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0725 20:12:36.932828       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:12:37.528831       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0725 20:12:37.529293       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [4054bbfea0c8] <==
	* I0725 20:12:41.829752       1 serving.go:348] Generated self-signed cert in-memory
	W0725 20:12:45.555300       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 20:12:45.557120       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:12:45.557188       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 20:12:45.557204       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 20:12:45.566966       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0725 20:12:45.567006       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:12:45.568029       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 20:12:45.568071       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 20:12:45.568061       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 20:12:45.568086       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:12:45.668789       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:07:40 UTC, end at Mon 2022-07-25 20:12:53 UTC. --
	Jul 25 20:12:43 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:43.492648   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:43 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:43.593892   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:43 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:43.694367   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:43 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:43.794859   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:43 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:43.895371   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:43 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:43.996022   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.096171   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.196838   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.298041   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.398933   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.500292   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.600662   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.701167   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.801493   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:44 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:44.902651   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:45.003390   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:45.104289   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:45.205783   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:45.306173   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:45.407699   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: E0725 20:12:45.507882   11615 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220725130322-44543\" not found"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: I0725 20:12:45.621200   11615 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220725130322-44543"
	Jul 25 20:12:45 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: I0725 20:12:45.621300   11615 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220725130322-44543"
	Jul 25 20:12:46 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: I0725 20:12:46.058673   11615 apiserver.go:52] "Watching apiserver"
	Jul 25 20:12:46 kubernetes-upgrade-20220725130322-44543 kubelet[11615]: I0725 20:12:46.207619   11615 reconciler.go:157] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220725130322-44543 -n kubernetes-upgrade-20220725130322-44543
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220725130322-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220725130322-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.508189221s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220725130322-44543 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220725130322-44543 describe pod storage-provisioner: exit status 1 (51.709464ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220725130322-44543 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220725130322-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220725130322-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220725130322-44543: (2.990800103s)
--- FAIL: TestKubernetesUpgrade (576.12s)
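Note that the post-mortem "describe pod" above exits 1 because no namespace is given: minikube's storage-provisioner pod runs in kube-system, so the lookup defaults to the "default" namespace and finds nothing. A minimal shell sketch of the same post-mortem with an explicit namespace (profile name taken from the log; this only works while the profile still exists, and the log shows it being deleted just above):

    # Sketch: repeat the harness's post-mortem queries by hand, with an explicit
    # namespace for storage-provisioner (it lives in kube-system, not default).
    PROFILE=kubernetes-upgrade-20220725130322-44543

    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"
    kubectl --context "$PROFILE" get po -A --field-selector=status.phase!=Running
    kubectl --context "$PROFILE" -n kube-system describe pod storage-provisioner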

                                                
                                    
TestMissingContainerUpgrade (50.89s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.694297447.exe start -p missing-upgrade-20220725130231-44543 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.694297447.exe start -p missing-upgrade-20220725130231-44543 --memory=2200 --driver=docker : exit status 78 (35.574845289s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220725130231-44543] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220725130231-44543
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220725130231-44543" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (incremental download progress updates elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:02:48.928087559 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220725130231-44543" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:03:05.600187622 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
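The recurring diff in this failure is informative on its own. The generated unit's comments describe standard systemd override semantics: an empty `ExecStart=` first clears the command inherited from the base unit, because systemd rejects a non-oneshot service that accumulates more than one `ExecStart=` setting. Two details in the rewritten unit stand out. First, `--default-ulimit=nofile=1048576:1048576` sets the default open-file limit for containers the daemon launches, overridable per container with `docker run --ulimit nofile=...`. Second, the `ExecReload=/bin/kill -s HUP ` line has lost its `$MAINPID` argument; that would break `systemctl reload docker`, though it does not by itself explain the start failure (that needs the journal output this report does not capture). A minimal sketch of the same override pattern as a drop-in, assuming a stock docker.service; the paths and dockerd flags below are illustrative, not the ones minikube writes:

	# Drop-ins under docker.service.d/ are merged into the base unit at load time.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
	[Service]
	# An empty ExecStart= clears the inherited command; without it, systemd refuses
	# to start the service ("more than one ExecStart= setting").
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	# $MAINPID must be present so kill(1) has a PID to signal on reload.
	ExecReload=/bin/kill -s HUP $MAINPID
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker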
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.694297447.exe start -p missing-upgrade-20220725130231-44543 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.694297447.exe start -p missing-upgrade-20220725130231-44543 --memory=2200 --driver=docker : exit status 70 (4.450215113s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220725130231-44543] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220725130231-44543
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220725130231-44543" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.694297447.exe start -p missing-upgrade-20220725130231-44543 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.694297447.exe start -p missing-upgrade-20220725130231-44543 --memory=2200 --driver=docker : exit status 70 (4.120748429s)

                                                
                                                
-- stdout --
	* [missing-upgrade-20220725130231-44543] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220725130231-44543
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220725130231-44543" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-07-25 13:03:19.386661 -0700 PDT m=+2734.825682554
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220725130231-44543
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220725130231-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f2a2bf7106e9c233837e5e0213c2ab591263c46abb811991c70e2a16dc88ba6",
	        "Created": "2022-07-25T20:02:57.124289615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 147027,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:02:57.351884705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/3f2a2bf7106e9c233837e5e0213c2ab591263c46abb811991c70e2a16dc88ba6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f2a2bf7106e9c233837e5e0213c2ab591263c46abb811991c70e2a16dc88ba6/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f2a2bf7106e9c233837e5e0213c2ab591263c46abb811991c70e2a16dc88ba6/hosts",
	        "LogPath": "/var/lib/docker/containers/3f2a2bf7106e9c233837e5e0213c2ab591263c46abb811991c70e2a16dc88ba6/3f2a2bf7106e9c233837e5e0213c2ab591263c46abb811991c70e2a16dc88ba6-json.log",
	        "Name": "/missing-upgrade-20220725130231-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220725130231-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4be1fd0f3e059bf964f1d19a50924b02b26bc67a65ab19000a2162b8c706cb32-init/diff:/var/lib/docker/overlay2/a93cd23c8b0acaf8fcced47c67e70f7a5e31c1e6373b4df9e8bfa8813efc5786/diff:/var/lib/docker/overlay2/0e92e1062a74cc932aab39fbb3ebc6ec8932495d723aa1f558dc1e7b61884cf1/diff:/var/lib/docker/overlay2/7f97644114605067dee71ba287cb6dcc6ac132c45670b05ab1096fba45f0f23d/diff:/var/lib/docker/overlay2/40d061e8d727c4270315a31be5d582b45d0850a15062f0c0c7351ca044ee11a4/diff:/var/lib/docker/overlay2/8c5a7d2c367dd2227945df13adc227447c4ca216af2f23656fde43dc2f11e791/diff:/var/lib/docker/overlay2/f047fec38b8395d9c86c11e004b410acaaae64afd90eb5bdc0b132ca680b1934/diff:/var/lib/docker/overlay2/f1557fb2a8002ba48e9c8f0879a0e068fa68ac48cb7f5016dfbda99fe0f32445/diff:/var/lib/docker/overlay2/0a755a82ef1cc3eb08218b3353f260c251a5dcdc32c2edd6e782a134e1efc430/diff:/var/lib/docker/overlay2/a3822357b1c58b7b72c0f0dfef6b77accae88438d2038cc08c7e76bb8293bff0/diff:/var/lib/docker/overlay2/e18fca
943054792455ca6262bb8a9150f288cac9a8e4141ba1410ca60dcccc98/diff:/var/lib/docker/overlay2/8ef76a523face9af34614e010e76ebb24c95553a918a349e5e67034e66136c01/diff:/var/lib/docker/overlay2/b15cb0911b9d281bba423abe79db5d01db85688003a73da86608923400e80389/diff:/var/lib/docker/overlay2/7c7d6e5ea166308d307eaf0bec66920f760683014e0277775dda7eab4ccec54b/diff:/var/lib/docker/overlay2/2f66f09e8477227550b4f4206baa8b06d51d7b4cac1f77ca77b73ba0a5f3fd74/diff:/var/lib/docker/overlay2/58e6a533380f3f5a7ea17a4ad175b53fba655e5eeb5d5dc645ce466b0c66a721/diff:/var/lib/docker/overlay2/f12e60ed4b2ddca167074f613e5f43ca27b1000a08831b029d91af2ccecb297c/diff:/var/lib/docker/overlay2/4b3655cac870ba13905b599d127c2cc5ca6890888cd04ad05c438c614ad66713/diff:/var/lib/docker/overlay2/527cd79c63316f8f8118fbb9a009fa7e1f6573946ee772dcc2560650d2303163/diff:/var/lib/docker/overlay2/a715635447391d038388216af45064ed801e1525d51b5797d94ff2d1473bf61c/diff:/var/lib/docker/overlay2/2f071347721a6fe87069b2a4d2964abd4df22ad1bd9e9df5165d8aad8e72c50f/diff:/var/lib/d
ocker/overlay2/f2a14fbca645cb1f4fdfb914ee9e6ec382777d2febaa55b6752fc402a52b6b47/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4be1fd0f3e059bf964f1d19a50924b02b26bc67a65ab19000a2162b8c706cb32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4be1fd0f3e059bf964f1d19a50924b02b26bc67a65ab19000a2162b8c706cb32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4be1fd0f3e059bf964f1d19a50924b02b26bc67a65ab19000a2162b8c706cb32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220725130231-44543",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220725130231-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220725130231-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220725130231-44543",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220725130231-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b15dfe951d95a7e2eb522c532d230504f3f22e179001dd07428952ddf01f688",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54869"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54871"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9b15dfe951d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "97d0721eff97a13a105ae1ff51acf23c47e4ec8dfd296559b2867627edefc499",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "58f47bf284f1e7ae0089ef140c63bd0490c46f229b6b6bfbb4dd62a5554dfbc3",
	                    "EndpointID": "97d0721eff97a13a105ae1ff51acf23c47e4ec8dfd296559b2867627edefc499",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220725130231-44543 -n missing-upgrade-20220725130231-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220725130231-44543 -n missing-upgrade-20220725130231-44543: exit status 6 (430.15414ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 13:03:19.877728   55995 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220725130231-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220725130231-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220725130231-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220725130231-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220725130231-44543: (2.446264824s)
--- FAIL: TestMissingContainerUpgrade (50.89s)
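The "systemctl status docker.service" and "journalctl -xe" hints in the error refer to the environment inside the kic container, not the macOS host. Since the docker inspect output above shows the container still in State.Status "running" before the profile was cleaned up, the unit's actual failure reason could have been pulled through docker exec. A sketch of that triage step, using this run's profile name; it assumes the container has not yet been deleted:

	# Unit state as systemd inside the kic container sees it:
	docker exec missing-upgrade-20220725130231-44543 systemctl status docker.service --no-pager
	# Last 50 journal lines for the unit, where dockerd's real error lands:
	docker exec missing-upgrade-20220725130231-44543 journalctl -u docker.service --no-pager -n 50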

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (46.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4024834793.exe start -p stopped-upgrade-20220725130445-44543 --memory=2200 --vm-driver=docker 
E0725 13:05:17.917935   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4024834793.exe start -p stopped-upgrade-20220725130445-44543 --memory=2200 --vm-driver=docker : exit status 70 (35.049754379s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220725130445-44543] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3262927355
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:05:03.217392461 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220725130445-44543" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:05:19.828393631 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220725130445-44543", then "minikube start -p stopped-upgrade-20220725130445-44543 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 17.22 MiB ... 542.91 MiB (download progress meter; intermediate updates elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:05:19.828393631 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4024834793.exe start -p stopped-upgrade-20220725130445-44543 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4024834793.exe start -p stopped-upgrade-20220725130445-44543 --memory=2200 --vm-driver=docker : exit status 70 (4.504532805s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220725130445-44543] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3821317972
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220725130445-44543" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4024834793.exe start -p stopped-upgrade-20220725130445-44543 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.4024834793.exe start -p stopped-upgrade-20220725130445-44543 --memory=2200 --vm-driver=docker : exit status 70 (4.513462877s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-20220725130445-44543] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1972290349
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220725130445-44543" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.55s)
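Both upgrade tests die in the same provisioning command, which is worth unpacking: `diff -u old new` exits 0 when the files match and non-zero when they differ, so the `|| { ... }` arm (install the new unit, reload systemd, restart docker) runs only when the rendered docker.service actually changed, and the exit status of the whole group is that of the final `systemctl -f restart docker`. A failed docker start therefore surfaces exactly as the `Process exited with status 1` seen above. The doubled `sudo sudo` in the logged command is redundant but harmless. A cleaned-up sketch of the idiom:

	# Install the regenerated unit only if it differs, then restart the daemon;
	# the group's exit status is the restart's, so a dead daemon fails the step.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl -f daemon-reload && sudo systemctl -f restart docker; }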

                                                
                                    
TestPause/serial/VerifyStatus (61.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220725130540-44543 --output=json --layout=cluster

                                                
                                                
=== CONT  TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220725130540-44543 --output=json --layout=cluster: exit status 2 (16.10272371s)

                                                
                                                
-- stdout --
	{"Name":"pause-20220725130540-44543","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220725130540-44543","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
pause_test.go:200: incorrect status code: 405
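The JSON above is the shape `--output=json --layout=cluster` emits: an HTTP-style code and name at the cluster level (200 OK, 405 Stopped here) plus per-node component codes. The test asserts on the cluster-level code and rejects 405, since a cluster whose step detail reads "Paused 14 containers" should report a paused state rather than a stopped one. A sketch of pulling out the fields the assertion cares about, assuming jq is available on the host (the status command itself exits 2 here but still prints the JSON):

	out/minikube-darwin-amd64 status -p pause-20220725130540-44543 \
	  --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, code: .StatusCode,
	         components: [.Nodes[].Components | to_entries[]
	                      | {(.key): .value.StatusName}]}'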
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220725130540-44543
helpers_test.go:235: (dbg) docker inspect pause-20220725130540-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "453a9dfc9e55926b8931bfe67d42df2b63551ff2809a59c012e588b0e24e33f2",
	        "Created": "2022-07-25T20:05:46.843639573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:05:47.147519958Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/453a9dfc9e55926b8931bfe67d42df2b63551ff2809a59c012e588b0e24e33f2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/453a9dfc9e55926b8931bfe67d42df2b63551ff2809a59c012e588b0e24e33f2/hostname",
	        "HostsPath": "/var/lib/docker/containers/453a9dfc9e55926b8931bfe67d42df2b63551ff2809a59c012e588b0e24e33f2/hosts",
	        "LogPath": "/var/lib/docker/containers/453a9dfc9e55926b8931bfe67d42df2b63551ff2809a59c012e588b0e24e33f2/453a9dfc9e55926b8931bfe67d42df2b63551ff2809a59c012e588b0e24e33f2-json.log",
	        "Name": "/pause-20220725130540-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220725130540-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220725130540-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2aa41a88cdde5c467bdfa45b59d566828bc8d20d6d97e62f0aabe0c340ec65cd-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2aa41a88cdde5c467bdfa45b59d566828bc8d20d6d97e62f0aabe0c340ec65cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2aa41a88cdde5c467bdfa45b59d566828bc8d20d6d97e62f0aabe0c340ec65cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2aa41a88cdde5c467bdfa45b59d566828bc8d20d6d97e62f0aabe0c340ec65cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220725130540-44543",
	                "Source": "/var/lib/docker/volumes/pause-20220725130540-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220725130540-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220725130540-44543",
	                "name.minikube.sigs.k8s.io": "pause-20220725130540-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edafa41001b2e2c2d226ba0e97374b79c1d39587feb7746c132a47a3b3a5a4fc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55572"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55573"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55574"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55575"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55576"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/edafa41001b2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220725130540-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "453a9dfc9e55",
	                        "pause-20220725130540-44543"
	                    ],
	                    "NetworkID": "39a0b05e9264c5a1820af86d9d9d5024208a205a861b26afc964cee8f0dfdcac",
	                    "EndpointID": "2f1a17f22864c72851eaaa8d1ec74f36b87e001fd9a955ed4b873530acf7e9a7",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220725130540-44543 -n pause-20220725130540-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220725130540-44543 -n pause-20220725130540-44543: exit status 2 (16.155339756s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220725130540-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220725130540-44543 logs -n 25: (13.099501144s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                      | force-systemd-flag-20220725130010-44543 | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | force-systemd-flag-20220725130010-44543 |                                         |         |         |                     |                     |
	|         | --memory=2048 --force-systemd           |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |         |         |                     |                     |
	| ssh     | force-systemd-env-20220725125947-44543  | force-systemd-env-20220725125947-44543  | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-env-20220725125947-44543  | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | force-systemd-env-20220725125947-44543  |                                         |         |         |                     |                     |
	| start   | -p                                      | docker-flags-20220725130019-44543       | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | docker-flags-20220725130019-44543       |                                         |         |         |                     |                     |
	|         | --cache-images=false                    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                    |                                         |         |         |                     |                     |
	|         | --docker-opt=debug                      |                                         |         |         |                     |                     |
	|         | --docker-opt=icc=true                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220725130010-44543 | force-systemd-flag-20220725130010-44543 | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-flag-20220725130010-44543 | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | force-systemd-flag-20220725130010-44543 |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220725130044-44543    | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:01 PDT |
	|         | cert-expiration-20220725130044-44543    |                                         |         |         |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220725130019-44543       | docker-flags-20220725130019-44543       | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=Environment --no-pager       |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220725130019-44543       | docker-flags-20220725130019-44543       | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |         |                     |                     |
	| delete  | -p                                      | docker-flags-20220725130019-44543       | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:00 PDT |
	|         | docker-flags-20220725130019-44543       |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-options-20220725130052-44543       | jenkins | v1.26.0 | 25 Jul 22 13:00 PDT | 25 Jul 22 13:01 PDT |
	|         | cert-options-20220725130052-44543       |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |         |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |         |                     |                     |
	| ssh     | cert-options-20220725130052-44543       | cert-options-20220725130052-44543       | jenkins | v1.26.0 | 25 Jul 22 13:01 PDT | 25 Jul 22 13:01 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                     |                     |
	| ssh     | -p                                      | cert-options-20220725130052-44543       | jenkins | v1.26.0 | 25 Jul 22 13:01 PDT | 25 Jul 22 13:01 PDT |
	|         | cert-options-20220725130052-44543       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220725130052-44543       | jenkins | v1.26.0 | 25 Jul 22 13:01 PDT | 25 Jul 22 13:01 PDT |
	|         | cert-options-20220725130052-44543       |                                         |         |         |                     |                     |
	| delete  | -p                                      | running-upgrade-20220725130124-44543    | jenkins | v1.26.0 | 25 Jul 22 13:02 PDT | 25 Jul 22 13:02 PDT |
	|         | running-upgrade-20220725130124-44543    |                                         |         |         |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220725130231-44543    | jenkins | v1.26.0 | 25 Jul 22 13:03 PDT | 25 Jul 22 13:03 PDT |
	|         | missing-upgrade-20220725130231-44543    |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:03 PDT |                     |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220725130044-44543    | jenkins | v1.26.0 | 25 Jul 22 13:04 PDT | 25 Jul 22 13:04 PDT |
	|         | cert-expiration-20220725130044-44543    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220725130044-44543    | jenkins | v1.26.0 | 25 Jul 22 13:04 PDT | 25 Jul 22 13:04 PDT |
	|         | cert-expiration-20220725130044-44543    |                                         |         |         |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220725130445-44543    | jenkins | v1.26.0 | 25 Jul 22 13:05 PDT | 25 Jul 22 13:05 PDT |
	|         | stopped-upgrade-20220725130445-44543    |                                         |         |         |                     |                     |
	| start   | -p pause-20220725130540-44543           | pause-20220725130540-44543              | jenkins | v1.26.0 | 25 Jul 22 13:05 PDT | 25 Jul 22 13:06 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |         |                     |                     |
	| start   | -p pause-20220725130540-44543           | pause-20220725130540-44543              | jenkins | v1.26.0 | 25 Jul 22 13:06 PDT | 25 Jul 22 13:07 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| pause   | -p pause-20220725130540-44543           | pause-20220725130540-44543              | jenkins | v1.26.0 | 25 Jul 22 13:07 PDT | 25 Jul 22 13:07 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:07 PDT | 25 Jul 22 13:07 PDT |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220725130322-44543 | jenkins | v1.26.0 | 25 Jul 22 13:07 PDT |                     |
	|         | kubernetes-upgrade-20220725130322-44543 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
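Each Audit row is a complete minikube invocation with its Args wrapped across table cells; a row can be reassembled and replayed verbatim. For example, the cert-options start recorded in this table:

    out/minikube-darwin-amd64 start -p cert-options-20220725130052-44543 \
        --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=docker --apiserver-name=localhost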
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:07:38
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:07:38.992988   56961 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:07:38.993146   56961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:07:38.993151   56961 out.go:309] Setting ErrFile to fd 2...
	I0725 13:07:38.993155   56961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:07:38.993256   56961 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:07:38.993690   56961 out.go:303] Setting JSON to false
	I0725 13:07:39.008550   56961 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":14831,"bootTime":1658764828,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:07:39.008690   56961 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:07:39.031416   56961 out.go:177] * [kubernetes-upgrade-20220725130322-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:07:39.075075   56961 notify.go:193] Checking for updates...
	I0725 13:07:39.097129   56961 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:07:39.119021   56961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:07:39.141213   56961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:07:39.163126   56961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:07:39.184275   56961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:07:39.206621   56961 config.go:178] Loaded profile config "kubernetes-upgrade-20220725130322-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:07:39.207243   56961 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:07:39.276958   56961 docker.go:137] docker version: linux-20.10.17
	I0725 13:07:39.277086   56961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:07:39.411819   56961 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:07:39.344702323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:07:39.455570   56961 out.go:177] * Using the docker driver based on existing profile
	I0725 13:07:39.477676   56961 start.go:284] selected driver: docker
	I0725 13:07:39.477704   56961 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:07:39.477880   56961 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:07:39.481155   56961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:07:39.614204   56961 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:07:39.54804057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:07:39.614351   56961 cni.go:95] Creating CNI manager for ""
	I0725 13:07:39.614362   56961 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:07:39.614372   56961 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:07:39.636537   56961 out.go:177] * Starting control plane node kubernetes-upgrade-20220725130322-44543 in cluster kubernetes-upgrade-20220725130322-44543
	I0725 13:07:39.658006   56961 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:07:39.680266   56961 out.go:177] * Pulling base image ...
	I0725 13:07:39.723099   56961 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:07:39.723103   56961 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:07:39.723217   56961 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:07:39.723243   56961 cache.go:57] Caching tarball of preloaded images
	I0725 13:07:39.723438   56961 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:07:39.724064   56961 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:07:39.724515   56961 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/config.json ...
	I0725 13:07:39.788604   56961 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:07:39.788622   56961 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:07:39.788634   56961 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:07:39.788683   56961 start.go:370] acquiring machines lock for kubernetes-upgrade-20220725130322-44543: {Name:mk3e7763670f2855f6746ef40eb840a24b5302f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:07:39.788766   56961 start.go:374] acquired machines lock for "kubernetes-upgrade-20220725130322-44543" in 60.163µs
	I0725 13:07:39.788785   56961 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:07:39.788796   56961 fix.go:55] fixHost starting: 
	I0725 13:07:39.789060   56961 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:07:39.857198   56961 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220725130322-44543: state=Stopped err=<nil>
	W0725 13:07:39.857228   56961 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:07:39.879433   56961 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220725130322-44543" ...
	I0725 13:07:39.922752   56961 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220725130322-44543
	I0725 13:07:40.271961   56961 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220725130322-44543 --format={{.State.Status}}
	I0725 13:07:40.346897   56961 kic.go:415] container "kubernetes-upgrade-20220725130322-44543" state is running.
	I0725 13:07:40.347842   56961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:40.428302   56961 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/config.json ...
	I0725 13:07:40.428813   56961 machine.go:88] provisioning docker machine ...
	I0725 13:07:40.428838   56961 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220725130322-44543"
	I0725 13:07:40.428962   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:40.506695   56961 main.go:134] libmachine: Using SSH client type: native
	I0725 13:07:40.506905   56961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:07:40.506934   56961 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220725130322-44543 && echo "kubernetes-upgrade-20220725130322-44543" | sudo tee /etc/hostname
	I0725 13:07:40.635836   56961 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220725130322-44543
	
	I0725 13:07:40.635914   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:40.710256   56961 main.go:134] libmachine: Using SSH client type: native
	I0725 13:07:40.710395   56961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:07:40.710420   56961 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220725130322-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220725130322-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220725130322-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:07:40.831590   56961 main.go:134] libmachine: SSH cmd err, output: <nil>: 
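The SSH script above pins the machine name in /etc/hosts idempotently: if the name already resolves it does nothing, if a 127.0.1.1 entry exists it is rewritten in place, and otherwise one is appended. A quick verification inside the node (a sketch, not part of this run):

    grep 127.0.1.1 /etc/hosts   # expect: 127.0.1.1 kubernetes-upgrade-20220725130322-44543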
	I0725 13:07:40.831613   56961 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:07:40.831645   56961 ubuntu.go:177] setting up certificates
	I0725 13:07:40.831657   56961 provision.go:83] configureAuth start
	I0725 13:07:40.831718   56961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:40.903146   56961 provision.go:138] copyHostCerts
	I0725 13:07:40.903231   56961 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:07:40.903241   56961 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:07:40.903328   56961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:07:40.903532   56961 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:07:40.903540   56961 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:07:40.903606   56961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:07:40.903739   56961 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:07:40.903744   56961 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:07:40.903798   56961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:07:40.903912   56961 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220725130322-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220725130322-44543]
	I0725 13:07:40.982399   56961 provision.go:172] copyRemoteCerts
	I0725 13:07:40.982455   56961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:07:40.982506   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:41.053394   56961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:07:41.141886   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:07:41.158682   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0725 13:07:41.175793   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:07:41.191949   56961 provision.go:86] duration metric: configureAuth took 360.272947ms
	I0725 13:07:41.191960   56961 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:07:41.192085   56961 config.go:178] Loaded profile config "kubernetes-upgrade-20220725130322-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:07:41.192135   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:41.263179   56961 main.go:134] libmachine: Using SSH client type: native
	I0725 13:07:41.263330   56961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:07:41.263340   56961 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:07:41.388548   56961 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:07:41.388565   56961 ubuntu.go:71] root file system type: overlay
	I0725 13:07:41.388784   56961 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:07:41.388851   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:41.459082   56961 main.go:134] libmachine: Using SSH client type: native
	I0725 13:07:41.459243   56961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:07:41.459297   56961 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:07:41.588169   56961 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:07:41.588259   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:41.660099   56961 main.go:134] libmachine: Using SSH client type: native
	I0725 13:07:41.660238   56961 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55792 <nil> <nil>}
	I0725 13:07:41.660253   56961 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:07:41.785339   56961 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:07:41.785351   56961 machine.go:91] provisioned docker machine in 1.356503347s
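The diff-or-replace one-liner above is what makes re-provisioning an unchanged machine cheap: docker.service.new is only moved into place, and the daemon only reloaded and restarted, when the freshly rendered unit differs from the installed one. The same guard generalizes to any managed config file; a minimal sketch with hypothetical paths and service name:

    if ! sudo diff -u /etc/myapp.conf /tmp/myapp.conf.new >/dev/null; then
        sudo mv /tmp/myapp.conf.new /etc/myapp.conf
        sudo systemctl daemon-reload && sudo systemctl restart myapp
    fi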
	I0725 13:07:41.785363   56961 start.go:307] post-start starting for "kubernetes-upgrade-20220725130322-44543" (driver="docker")
	I0725 13:07:41.785370   56961 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:07:41.785436   56961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:07:41.785485   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:41.856503   56961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:07:41.943065   56961 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:07:41.946514   56961 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:07:41.946533   56961 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:07:41.946544   56961 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:07:41.946550   56961 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:07:41.946561   56961 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:07:41.946668   56961 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:07:41.946829   56961 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:07:41.946985   56961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:07:41.954020   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:07:41.970625   56961 start.go:310] post-start completed in 185.247578ms
	I0725 13:07:41.970689   56961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:07:41.970750   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:42.042648   56961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:07:42.134670   56961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:07:42.138929   56961 fix.go:57] fixHost completed within 2.350088368s
	I0725 13:07:42.138943   56961 start.go:82] releasing machines lock for "kubernetes-upgrade-20220725130322-44543", held for 2.35012378s
	I0725 13:07:42.139013   56961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:42.210266   56961 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:07:42.210276   56961 ssh_runner.go:195] Run: systemctl --version
	I0725 13:07:42.210339   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:42.210338   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:42.287245   56961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:07:42.288944   56961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55792 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/kubernetes-upgrade-20220725130322-44543/id_rsa Username:docker}
	I0725 13:07:42.522883   56961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:07:42.533940   56961 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:07:42.533994   56961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:07:42.546040   56961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:07:42.558559   56961 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:07:42.626746   56961 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:07:42.692747   56961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:07:42.761404   56961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:07:42.952337   56961 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:07:43.020923   56961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:07:43.090566   56961 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:07:43.101088   56961 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:07:43.101155   56961 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:07:43.104834   56961 start.go:471] Will wait 60s for crictl version
	I0725 13:07:43.104890   56961 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:07:43.203305   56961 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
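The crictl version probe above succeeds because /etc/crictl.yaml, written a few lines earlier, points crictl at the cri-dockerd socket, so CRI calls are served by the Docker runtime. The same endpoints can also be supplied per invocation; a sketch using standard crictl flags:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock \
        --image-endpoint unix:///var/run/cri-dockerd.sock version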
	I0725 13:07:43.203369   56961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:07:43.237718   56961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:07:43.316914   56961 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:07:43.317138   56961 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220725130322-44543 dig +short host.docker.internal
	I0725 13:07:43.448876   56961 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:07:43.448978   56961 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:07:43.453030   56961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:07:43.462274   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:43.533081   56961 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:07:43.533137   56961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:07:43.561319   56961 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:07:43.561332   56961 docker.go:617] k8s.gcr.io/kube-apiserver:v1.24.2 wasn't preloaded
	I0725 13:07:43.561391   56961 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 13:07:43.568414   56961 ssh_runner.go:195] Run: which lz4
	I0725 13:07:43.571727   56961 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 13:07:43.575202   56961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0725 13:07:43.575219   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425551914 bytes)
	I0725 13:07:50.200650   56961 docker.go:576] Took 6.628827 seconds to copy over tarball
	I0725 13:07:50.200718   56961 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 13:07:52.042099   56961 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.841318966s)
	I0725 13:07:52.042113   56961 ssh_runner.go:146] rm: /preloaded.tar.lz4
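The preload sequence is: scp the cached lz4 tarball into the node, untar it over /var (which holds the Docker image store), then remove the tarball. The host-side cache the copy came from can be listed directly (path taken from the log above):

    ls -lh /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/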
	I0725 13:07:52.098442   56961 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 13:07:52.105802   56961 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2814 bytes)
	I0725 13:07:52.119174   56961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:07:52.189750   56961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:07:53.302696   56961 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.112903029s)
	I0725 13:07:53.302781   56961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:07:53.333389   56961 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0725 13:07:53.333408   56961 cache_images.go:84] Images are preloaded, skipping loading
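The verdicts above ("k8s.gcr.io/kube-apiserver:v1.24.2 wasn't preloaded" earlier, "Images are preloaded, skipping loading" now) reduce to a set difference between the expected tag list and the docker images --format {{.Repository}}:{{.Tag}} output; the stray <none>:<none> entries are harmless because nothing expects them. A toy version of that check (hypothetical missingImages, not the cache_images.go implementation):

package main

import "fmt"

// missingImages reports which expected image tags are absent from a
// `docker images` listing; an empty result means the preload sufficed.
func missingImages(expected, have []string) []string {
	present := make(map[string]bool, len(have))
	for _, img := range have {
		present[img] = true
	}
	var missing []string
	for _, img := range expected {
		if !present[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	expected := []string{"k8s.gcr.io/kube-apiserver:v1.24.2", "k8s.gcr.io/etcd:3.5.3-0"}
	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.2"}
	fmt.Println(missingImages(expected, have)) // [k8s.gcr.io/etcd:3.5.3-0]
}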
	I0725 13:07:53.333469   56961 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:07:53.405325   56961 cni.go:95] Creating CNI manager for ""
	I0725 13:07:53.405342   56961 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:07:53.405353   56961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:07:53.405368   56961 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220725130322-44543 NodeName:kubernetes-upgrade-20220725130322-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:07:53.405463   56961 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-20220725130322-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:07:53.405529   56961 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-20220725130322-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:07:53.405583   56961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:07:53.412965   56961 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:07:53.413012   56961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:07:53.419989   56961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (501 bytes)
	I0725 13:07:53.432594   56961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:07:53.444733   56961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
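The 2061-byte kubeadm.yaml.new written above carries the four YAML documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the kubernetesVersion field in the ClusterConfiguration document is what the "needs reconfigure" diff further down keys on. A speculative sketch of reading that field back out, assuming gopkg.in/yaml.v3 for multi-document decoding:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// clusterVersion scans a multi-document kubeadm config and returns the
// kubernetesVersion from its ClusterConfiguration document.
func clusterVersion(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			Kind              string `yaml:"kind"`
			KubernetesVersion string `yaml:"kubernetesVersion"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			return "", err
		}
		if doc.Kind == "ClusterConfiguration" {
			return doc.KubernetesVersion, nil
		}
	}
	return "", fmt.Errorf("no ClusterConfiguration document in %s", path)
}

func main() {
	v, err := clusterVersion("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubernetesVersion:", v) // v1.24.2 for the config above
}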
	I0725 13:07:53.457132   56961 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:07:53.460846   56961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:07:53.479480   56961 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543 for IP: 192.168.76.2
	I0725 13:07:53.479597   56961 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:07:53.479646   56961 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:07:53.479741   56961 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key
	I0725 13:07:53.479802   56961 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key.31bdca25
	I0725 13:07:53.479859   56961 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key
	I0725 13:07:53.480050   56961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:07:53.480085   56961 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:07:53.480098   56961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:07:53.480129   56961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:07:53.480158   56961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:07:53.480190   56961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:07:53.480252   56961 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:07:53.480721   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:07:53.499401   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:07:53.516042   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:07:53.533108   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 13:07:53.549512   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:07:53.566134   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:07:53.582706   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:07:53.599806   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:07:53.616263   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:07:53.633039   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:07:53.650046   56961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:07:53.667405   56961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:07:53.679816   56961 ssh_runner.go:195] Run: openssl version
	I0725 13:07:53.685087   56961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:07:53.693025   56961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:07:53.696765   56961 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:07:53.696804   56961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:07:53.702139   56961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:07:53.709285   56961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:07:53.716667   56961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:07:53.720678   56961 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:07:53.720719   56961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:07:53.726206   56961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:07:53.733904   56961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:07:53.741476   56961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:07:53.745448   56961 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:07:53.745482   56961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:07:53.750637   56961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
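The openssl/ln sequence above follows the standard OpenSSL CA layout: each PEM is installed under /usr/share/ca-certificates, openssl x509 -hash -noout prints its subject hash, and a symlink named <hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 here; the .0 suffix disambiguates hash collisions) is created in /etc/ssl/certs so TLS clients can find the trust anchor. A sketch of one such linking step (hypothetical linkCert helper shelling out to the same openssl binary the log uses):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert symlinks certPath into certsDir under its OpenSSL subject
// hash (<hash>.0), the layout the test -L || ln -fs commands maintain.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}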
	I0725 13:07:53.757592   56961 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220725130322-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220725130322-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:07:53.757676   56961 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:07:53.786972   56961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:07:53.794364   56961 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:07:53.794378   56961 kubeadm.go:626] restartCluster start
	I0725 13:07:53.794434   56961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:07:53.801078   56961 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:07:53.801135   56961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220725130322-44543
	I0725 13:07:53.872888   56961 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220725130322-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:07:53.873092   56961 kubeconfig.go:127] "kubernetes-upgrade-20220725130322-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:07:53.873476   56961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:07:53.874358   56961 kapi.go:59] client config for kubernetes-upgrade-20220725130322-44543: &rest.Config{Host:"https://127.0.0.1:55791", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubernetes-upgrade-20220725130322-44543/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 13:07:53.874842   56961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:07:53.882640   56961 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-07-25 20:03:39.391670890 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-07-25 20:07:53.473115894 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/dockershim.sock
	+  criSocket: /var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-20220725130322-44543"
	   kubeletExtraArgs:
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20220725130322-44543
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.24.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0725 13:07:53.882654   56961 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:07:53.882719   56961 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:07:53.910418   56961 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:07:53.920605   56961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:07:53.928075   56961 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jul 25 20:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5799 Jul 25 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5959 Jul 25 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5747 Jul 25 20:05 /etc/kubernetes/scheduler.conf
	
	I0725 13:07:53.928125   56961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:07:53.935683   56961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:07:53.943072   56961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:07:53.949977   56961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:07:53.957457   56961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:07:53.964522   56961 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:07:53.964532   56961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:07:54.034981   56961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:07:54.688815   56961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:07:54.866093   56961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:07:54.913036   56961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
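The five kubeadm invocations above are the reconfigure path: rather than a destructive full kubeadm init, the restart replays individual init phases (certs all, kubeconfig all, kubelet-start, control-plane all, etcd local) against the already-provisioned node using the refreshed config. A sketch of that loop (hypothetical restartPhases wrapper; the real caller runs each phase over SSH with the versioned PATH shown above):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// restartPhases replays the kubeadm init phases from the log, in order,
// against an existing node instead of running a full `kubeadm init`.
func restartPhases(config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("phase %v: %w", phase, err)
		}
	}
	return nil
}

func main() {
	if err := restartPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}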
	I0725 13:07:54.958815   56961 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:07:54.958881   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:55.473617   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:55.971679   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:56.472616   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:56.972560   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:57.472560   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:57.972473   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:58.471663   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:58.972569   56961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:07:58.982531   56961 api_server.go:71] duration metric: took 4.023641216s to wait for apiserver process to appear ...
	I0725 13:07:58.982549   56961 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:07:58.982567   56961 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55791/healthz ...
	I0725 13:08:03.983280   56961 api_server.go:256] stopped: https://127.0.0.1:55791/healthz: Get "https://127.0.0.1:55791/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
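This is the failure point of the restart: the kube-apiserver process appeared after about 4 s (4.023641216s above), but the follow-up probe of https://127.0.0.1:55791/healthz never answered, and the 5 s gap before "stopped" together with the "Client.Timeout exceeded" message suggests a per-request client timeout of roughly that length. A minimal sketch of such a poll loop (hypothetical waitHealthz; it skips TLS verification and client-cert auth purely for brevity, where the real client presents the cluster credentials from the rest.Config above):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns
// 200 OK or the overall deadline passes. InsecureSkipVerify is only for
// the sketch; a real client pins the cluster CA instead.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // per-request cap, matching the ~5 s gap in the log
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, deadline)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:55791/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}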
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:05:47 UTC, end at Mon 2022-07-25 20:08:07 UTC. --
	Jul 25 20:06:38 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:38.245166004Z" level=info msg="ignoring event" container=41e2a5b52ee3f71584ecaf79abd72e3986b1a9f7c2296fc13b1ab5cb8f5c1ea1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:06:38 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:38.314377971Z" level=info msg="ignoring event" container=86587cb729ef363f7dd491ced65a49a408ea8111a631614d312d28aebd73f01b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:06:38 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:38.314405101Z" level=info msg="ignoring event" container=cc0d247d0c65f2f6da67abbc36ca6c9b7ffcc8ee524133888580cc45df276100 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:06:38 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:38.318738784Z" level=info msg="ignoring event" container=495fb048a85fef35879907f56f59ca69dfc1b4d6aee6223cd1338b164dc3acb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:06:43 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:43.232560902Z" level=info msg="ignoring event" container=70028ce4d567e666829d21ea367c9f54206daaead11d0ba1395aade6d1a038dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.210110714Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=9ee38c2918907891e80e9751920b825ed1036aeb2e5ea00b39017198e0130779
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.305578276Z" level=info msg="ignoring event" container=9ee38c2918907891e80e9751920b825ed1036aeb2e5ea00b39017198e0130779 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.456907152Z" level=info msg="Removing stale sandbox b3f435778a6adaf39f024047f9610e1963b7667867ea949ce30945110ee5dc46 (41e2a5b52ee3f71584ecaf79abd72e3986b1a9f7c2296fc13b1ab5cb8f5c1ea1)"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.458211386Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fdcf6e74fb304d9cec9dba33209e975db7e30f4c3e1b0ba51eee2e9d5cc1720d 3a1a5f82ac6dae507d68dd8af15e217a3840d1bb6acc26594b5167ae686bc582], retrying...."
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.543828817Z" level=info msg="Removing stale sandbox 1405a010f7d3da96aef2168eb36892f865a500aa272a8b4369d9b4cbeda34746 (e9dc78b760f15f2f75b28b4fe8396eff9be05da98774ada51fecd80432dd96b5)"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.545054392Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fdcf6e74fb304d9cec9dba33209e975db7e30f4c3e1b0ba51eee2e9d5cc1720d f6d661445642ead3eb66e51fa28a26ab96edce9a937193e5803ea8f7340b2cb4], retrying...."
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.629544718Z" level=info msg="Removing stale sandbox 4b5b90f2e1c241aea72a7755dc5d5c068b9b479998795ba9fad2da68c2370f60 (5ce70dd10ec5b6e975abb525c593156d3f20ad970cdd10006d0921fa5d95a340)"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.632953604Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 4ea4f48a5d277ede759a66fdbc7313fc30d249b37998ccf75740a4b46887018d 4d20b904770e4b591929201bf293750ee40a98aad7c8396ca9bc71fffa02ceb7], retrying...."
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.727329042Z" level=info msg="Removing stale sandbox 6ceacee6bff8d50e238e12723452ca03724e38f30488255f6c334963b4967138 (495fb048a85fef35879907f56f59ca69dfc1b4d6aee6223cd1338b164dc3acb3)"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.728625792Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fdcf6e74fb304d9cec9dba33209e975db7e30f4c3e1b0ba51eee2e9d5cc1720d 056e414cfabb01bd068ad9cd7a43d079e861b430c972ecfe78a5fb2046136d43], retrying...."
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.814330910Z" level=info msg="Removing stale sandbox 8b74d6c7ab716074b23c54021e46dfed232267f76a4f84b06f12331de3012ecc (cc0d247d0c65f2f6da67abbc36ca6c9b7ffcc8ee524133888580cc45df276100)"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.815659167Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint fdcf6e74fb304d9cec9dba33209e975db7e30f4c3e1b0ba51eee2e9d5cc1720d 776ea7ac0311aff665f8ef3e66daf3fd3c82ce8f868725f5fcbc334649b064b6], retrying...."
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.837992777Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.873264988Z" level=info msg="Loading containers: done."
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.884327642Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.884401330Z" level=info msg="Daemon has completed initialization"
	Jul 25 20:06:48 pause-20220725130540-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.910928052Z" level=info msg="API listen on [::]:2376"
	Jul 25 20:06:48 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:48.913269835Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 20:06:49 pause-20220725130540-44543 dockerd[4040]: time="2022-07-25T20:06:49.137630196Z" level=error msg="0cc997adfa5a8c6922e1acb057550219b7c67f926c255513cba0892c0fcbb01f cleanup: failed to delete container from containerd: no such container"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	fe61a4c6f462f       34cdf99b1bb3b       49 seconds ago       Running             kube-controller-manager   3                   352785f8fc73a
	6dabd1173b792       a4ca41631cc7a       About a minute ago   Running             coredns                   2                   29b1d0b7627af
	452fed047b983       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   aa38e3b70bf93
	3916b4d5ea10a       aebe758cef4cd       About a minute ago   Running             etcd                      2                   564da12fa33f3
	d2f7f3f647dd8       5d725196c1f47       About a minute ago   Running             kube-scheduler            1                   1ca8cf51651e3
	be38304a8d9e4       d3377ffb7177c       About a minute ago   Running             kube-apiserver            2                   1e4a94dd2fa43
	0cc997adfa5a8       34cdf99b1bb3b       About a minute ago   Created             kube-controller-manager   2                   cc0d247d0c65f
	d8bc4acc6c13c       a634548d10b03       About a minute ago   Running             kube-proxy                2                   57eb985c387b1
	70028ce4d567e       a4ca41631cc7a       About a minute ago   Exited              coredns                   1                   5ce70dd10ec5b
	9ee38c2918907       d3377ffb7177c       About a minute ago   Exited              kube-apiserver            1                   e9dc78b760f15
	86587cb729ef3       a634548d10b03       About a minute ago   Exited              kube-proxy                1                   495fb048a85fe
	1e5ef54fce49d       aebe758cef4cd       About a minute ago   Exited              etcd                      1                   41e2a5b52ee3f
	47602eb09ff17       5d725196c1f47       2 minutes ago        Exited              kube-scheduler            0                   83f266983ff3e
	
	* 
	* ==> coredns [6dabd1173b79] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [70028ce4d567] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001466] FS-Cache: O-key=[8] '17ff1b0300000000'
	[  +0.001104] FS-Cache: N-cookie c=000000004d55404a [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.001777] FS-Cache: N-cookie d=0000000042f28ee6 n=00000000d4493a29
	[  +0.001412] FS-Cache: N-key=[8] '17ff1b0300000000'
	[  +0.002998] FS-Cache: Duplicate cookie detected
	[  +0.001046] FS-Cache: O-cookie c=000000009c120975 [p=00000000b8586770 fl=226 nc=0 na=1]
	[  +0.001828] FS-Cache: O-cookie d=0000000042f28ee6 n=00000000da9a003a
	[  +0.001534] FS-Cache: O-key=[8] '17ff1b0300000000'
	[  +0.001185] FS-Cache: N-cookie c=000000004d55404a [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.001802] FS-Cache: N-cookie d=0000000042f28ee6 n=00000000e00d4042
	[  +0.001494] FS-Cache: N-key=[8] '17ff1b0300000000'
	[  +3.310474] FS-Cache: Duplicate cookie detected
	[  +0.001024] FS-Cache: O-cookie c=000000006d166cd5 [p=00000000b8586770 fl=226 nc=0 na=1]
	[  +0.001787] FS-Cache: O-cookie d=0000000042f28ee6 n=000000002e66eab5
	[  +0.001771] FS-Cache: O-key=[8] '16ff1b0300000000'
	[  +0.001129] FS-Cache: N-cookie c=000000004eb511b0 [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.001806] FS-Cache: N-cookie d=0000000042f28ee6 n=00000000f4062f04
	[  +0.001537] FS-Cache: N-key=[8] '16ff1b0300000000'
	[  +0.463724] FS-Cache: Duplicate cookie detected
	[  +0.001228] FS-Cache: O-cookie c=00000000e0ab6724 [p=00000000b8586770 fl=226 nc=0 na=1]
	[  +0.002297] FS-Cache: O-cookie d=0000000042f28ee6 n=000000004a36e55c
	[  +0.001837] FS-Cache: O-key=[8] '1dff1b0300000000'
	[  +0.001364] FS-Cache: N-cookie c=00000000f2e66760 [p=00000000b8586770 fl=2 nc=0 na=1]
	[  +0.002135] FS-Cache: N-cookie d=0000000042f28ee6 n=0000000079fcd9bf
	[  +0.001675] FS-Cache: N-key=[8] '1dff1b0300000000'
	
	* 
	* ==> etcd [1e5ef54fce49] <==
	* {"level":"info","ts":"2022-07-25T20:06:28.831Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T20:06:28.831Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:06:28.832Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:30.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:30.226Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220725130540-44543 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:06:30.227Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:06:30.227Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:06:30.228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:06:30.228Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:06:30.228Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-25T20:06:30.229Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:06:38.190Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-25T20:06:38.190Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220725130540-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/07/25 20:06:38 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/25 20:06:38 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-25T20:06:38.209Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-07-25T20:06:38.211Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T20:06:38.212Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T20:06:38.212Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220725130540-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [3916b4d5ea10] <==
	* {"level":"info","ts":"2022-07-25T20:06:50.024Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-25T20:06:50.024Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T20:06:50.024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-25T20:06:50.024Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:06:50.024Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:06:50.024Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:06:50.027Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:06:50.027Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T20:06:50.027Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-25T20:06:50.027Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:06:50.027Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-07-25T20:06:51.218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-07-25T20:06:51.219Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220725130540-44543 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:06:51.219Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:06:51.220Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:06:51.220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:06:51.220Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:06:51.221Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-25T20:06:51.221Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:08:18 up 49 min,  0 users,  load average: 0.32, 0.79, 0.82
	Linux pause-20220725130540-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9ee38c291890] <==
	* W0725 20:06:47.327516       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.354830       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.409800       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.417972       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.595162       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.603185       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.639204       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.644536       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.648822       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.703097       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.736244       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.745894       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.763023       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.792766       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.808279       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.845407       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.917622       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:47.956408       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.007002       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.017388       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.081820       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.131384       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.164324       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.176241       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:06:48.191421       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [be38304a8d9e] <==
	* I0725 20:06:52.884590       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0725 20:06:52.884596       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0725 20:06:52.888127       1 available_controller.go:491] Starting AvailableConditionController
	I0725 20:06:52.888133       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0725 20:06:52.890520       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0725 20:06:52.890544       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0725 20:06:52.896541       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 20:06:52.900129       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0725 20:06:52.932480       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0725 20:06:52.954565       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0725 20:06:53.005885       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 20:06:53.006207       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 20:06:53.006185       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 20:06:53.007881       1 cache.go:39] Caches are synced for autoregister controller
	I0725 20:06:53.008189       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0725 20:06:53.008328       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 20:06:53.025316       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:06:53.032789       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:06:53.667307       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 20:06:53.884896       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 20:06:55.399426       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:06:55.413301       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 20:06:55.418397       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 20:06:55.423165       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 20:07:29.771224       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [0cc997adfa5a] <==
	* 
	* 
	* ==> kube-controller-manager [fe61a4c6f462] <==
	* I0725 20:07:29.818644       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0725 20:07:29.818710       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0725 20:07:29.820003       1 shared_informer.go:262] Caches are synced for crt configmap
	I0725 20:07:29.821299       1 shared_informer.go:262] Caches are synced for service account
	I0725 20:07:29.825722       1 shared_informer.go:262] Caches are synced for deployment
	I0725 20:07:29.827160       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0725 20:07:29.830877       1 shared_informer.go:262] Caches are synced for node
	I0725 20:07:29.830920       1 range_allocator.go:173] Starting range CIDR allocator
	I0725 20:07:29.830925       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0725 20:07:29.830932       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0725 20:07:29.837982       1 shared_informer.go:262] Caches are synced for expand
	I0725 20:07:29.843062       1 shared_informer.go:262] Caches are synced for TTL
	I0725 20:07:29.844568       1 shared_informer.go:262] Caches are synced for ephemeral
	I0725 20:07:29.909776       1 shared_informer.go:262] Caches are synced for endpoint
	I0725 20:07:29.912508       1 shared_informer.go:262] Caches are synced for cronjob
	I0725 20:07:29.924523       1 shared_informer.go:262] Caches are synced for job
	I0725 20:07:29.942019       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0725 20:07:29.963854       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0725 20:07:30.014507       1 shared_informer.go:262] Caches are synced for disruption
	I0725 20:07:30.014573       1 disruption.go:371] Sending events to api server.
	I0725 20:07:30.050338       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:07:30.086310       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:07:30.464189       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:07:30.534593       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:07:30.534660       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [86587cb729ef] <==
	* E0725 20:06:32.712220       1 node.go:152] Failed to retrieve node info: nodes "pause-20220725130540-44543" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0725 20:06:33.734370       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0725 20:06:33.734835       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0725 20:06:33.734875       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:06:33.808228       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:06:33.808286       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:06:33.808298       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:06:33.808310       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:06:33.808341       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:06:33.808590       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:06:33.810258       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:06:33.810269       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:06:33.811089       1 config.go:444] "Starting node config controller"
	I0725 20:06:33.811100       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:06:33.811311       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:06:33.811323       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:06:33.816821       1 config.go:317] "Starting service config controller"
	I0725 20:06:33.816851       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:06:33.912050       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:06:33.917507       1 shared_informer.go:262] Caches are synced for service config
	I0725 20:06:33.917561       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [d8bc4acc6c13] <==
	* E0725 20:06:49.830016       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220725130540-44543": dial tcp 192.168.67.2:8443: connect: connection refused
	I0725 20:06:52.933519       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0725 20:06:52.933563       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0725 20:06:52.933585       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:06:53.028122       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:06:53.028179       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:06:53.028189       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:06:53.028198       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:06:53.028219       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:06:53.028321       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:06:53.028455       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:06:53.028504       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:06:53.029869       1 config.go:317] "Starting service config controller"
	I0725 20:06:53.029902       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:06:53.029937       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:06:53.029942       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:06:53.030231       1 config.go:444] "Starting node config controller"
	I0725 20:06:53.030276       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:06:53.130310       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:06:53.130351       1 shared_informer.go:262] Caches are synced for service config
	I0725 20:06:53.130369       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [47602eb09ff1] <==
	* W0725 20:06:04.478494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:06:04.479046       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:06:04.478529       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:06:04.479055       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:06:04.478642       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:06:04.478715       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:06:04.478867       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:06:04.479145       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:06:04.479156       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:06:05.427476       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 20:06:05.427527       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:06:05.451608       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:06:05.451642       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:06:05.554104       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:06:05.554142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:06:05.559539       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:06:05.559575       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:06:05.560014       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:06:05.560046       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:06:05.578696       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:06:05.578732       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0725 20:06:05.974737       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:06:28.030629       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 20:06:28.031068       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0725 20:06:28.031138       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [d2f7f3f647dd] <==
	* I0725 20:06:50.476224       1 serving.go:348] Generated self-signed cert in-memory
	W0725 20:06:52.918451       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 20:06:52.918609       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:06:52.918618       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 20:06:52.918623       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 20:06:52.936054       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0725 20:06:52.936164       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:06:52.942170       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 20:06:52.942410       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 20:06:52.942459       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 20:06:52.942465       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:06:53.042678       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:05:47 UTC, end at Mon 2022-07-25 20:08:19 UTC. --
	Jul 25 20:06:52 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:52.240695    1932 scope.go:110] "RemoveContainer" containerID="70028ce4d567e666829d21ea367c9f54206daaead11d0ba1395aade6d1a038dd"
	Jul 25 20:06:52 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:52.240798    1932 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725130540-44543_kube-system(bc7aa17b074ee747df4a1fc7ec5fb0e4)\"" pod="kube-system/kube-controller-manager-pause-20220725130540-44543" podUID=bc7aa17b074ee747df4a1fc7ec5fb0e4
	Jul 25 20:06:52 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:52.240866    1932 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-6d4b75cb6d-br9k9_kube-system(5c3e832f-1dc1-4da5-8aa3-bcee867a6c5d)\"" pod="kube-system/coredns-6d4b75cb6d-br9k9" podUID=5c3e832f-1dc1-4da5-8aa3-bcee867a6c5d
	Jul 25 20:06:52 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:52.919875    1932 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 25 20:06:52 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:52.919971    1932 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 25 20:06:52 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:52.920868    1932 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 25 20:06:53 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:53.246979    1932 scope.go:110] "RemoveContainer" containerID="0cc997adfa5a8c6922e1acb057550219b7c67f926c255513cba0892c0fcbb01f"
	Jul 25 20:06:53 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:53.247264    1932 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725130540-44543_kube-system(bc7aa17b074ee747df4a1fc7ec5fb0e4)\"" pod="kube-system/kube-controller-manager-pause-20220725130540-44543" podUID=bc7aa17b074ee747df4a1fc7ec5fb0e4
	Jul 25 20:06:54 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:54.253439    1932 scope.go:110] "RemoveContainer" containerID="0cc997adfa5a8c6922e1acb057550219b7c67f926c255513cba0892c0fcbb01f"
	Jul 25 20:06:54 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:54.253734    1932 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725130540-44543_kube-system(bc7aa17b074ee747df4a1fc7ec5fb0e4)\"" pod="kube-system/kube-controller-manager-pause-20220725130540-44543" podUID=bc7aa17b074ee747df4a1fc7ec5fb0e4
	Jul 25 20:06:55 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:55.796181    1932 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:06:55 pause-20220725130540-44543 kubelet[1932]: E0725 20:06:55.796255    1932 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="5311e8b0-bbbd-426d-9e32-33acd56d88cc" containerName="coredns"
	Jul 25 20:06:55 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:55.796276    1932 memory_manager.go:345] "RemoveStaleState removing state" podUID="5311e8b0-bbbd-426d-9e32-33acd56d88cc" containerName="coredns"
	Jul 25 20:06:55 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:55.960606    1932 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw6d7\" (UniqueName: \"kubernetes.io/projected/f4a1d687-3071-4a86-91d1-2415124aa8c8-kube-api-access-vw6d7\") pod \"storage-provisioner\" (UID: \"f4a1d687-3071-4a86-91d1-2415124aa8c8\") " pod="kube-system/storage-provisioner"
	Jul 25 20:06:55 pause-20220725130540-44543 kubelet[1932]: I0725 20:06:55.960706    1932 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f4a1d687-3071-4a86-91d1-2415124aa8c8-tmp\") pod \"storage-provisioner\" (UID: \"f4a1d687-3071-4a86-91d1-2415124aa8c8\") " pod="kube-system/storage-provisioner"
	Jul 25 20:07:07 pause-20220725130540-44543 kubelet[1932]: I0725 20:07:07.090186    1932 scope.go:110] "RemoveContainer" containerID="a50823f706f0cc08d6fe3a6aca5498e5126155c0c395a5640b72b03fea8f5472"
	Jul 25 20:07:07 pause-20220725130540-44543 kubelet[1932]: I0725 20:07:07.131263    1932 scope.go:110] "RemoveContainer" containerID="70028ce4d567e666829d21ea367c9f54206daaead11d0ba1395aade6d1a038dd"
	Jul 25 20:07:07 pause-20220725130540-44543 kubelet[1932]: I0725 20:07:07.132051    1932 scope.go:110] "RemoveContainer" containerID="0cc997adfa5a8c6922e1acb057550219b7c67f926c255513cba0892c0fcbb01f"
	Jul 25 20:07:07 pause-20220725130540-44543 kubelet[1932]: E0725 20:07:07.132543    1932 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-20220725130540-44543_kube-system(bc7aa17b074ee747df4a1fc7ec5fb0e4)\"" pod="kube-system/kube-controller-manager-pause-20220725130540-44543" podUID=bc7aa17b074ee747df4a1fc7ec5fb0e4
	Jul 25 20:07:18 pause-20220725130540-44543 kubelet[1932]: I0725 20:07:18.131387    1932 scope.go:110] "RemoveContainer" containerID="0cc997adfa5a8c6922e1acb057550219b7c67f926c255513cba0892c0fcbb01f"
	Jul 25 20:07:33 pause-20220725130540-44543 kubelet[1932]: I0725 20:07:33.978035    1932 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jul 25 20:07:33 pause-20220725130540-44543 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 25 20:07:34 pause-20220725130540-44543 systemd[1]: kubelet.service: Succeeded.
	Jul 25 20:07:34 pause-20220725130540-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:07:34 pause-20220725130540-44543 systemd[1]: kubelet.service: Consumed 2.433s CPU time.
	
	* 
	* ==> storage-provisioner [452fed047b98] <==
	* I0725 20:06:56.292818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:06:56.300713       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:06:56.300759       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:06:56.308946       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:06:56.309105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220725130540-44543_4147f867-8260-482f-b951-2cd5e3c8a8af!
	I0725 20:06:56.309536       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d5b0d032-b797-4172-b472-b5371dd5a36c", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220725130540-44543_4147f867-8260-482f-b951-2cd5e3c8a8af became leader
	I0725 20:06:56.409868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220725130540-44543_4147f867-8260-482f-b951-2cd5e3c8a8af!
	
	

-- /stdout --
** stderr ** 
	E0725 13:08:17.931238   57068 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
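
The "TLS handshake timeout" in the stderr above comes from minikube's log collector shelling out to kubectl (logs.go:192). A minimal sketch of that kind of call, assuming only the command shown in the log (an illustration, not minikube's actual logs.go): a deadline on the exec makes a hung apiserver fail fast instead of stalling log collection.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func describeNodes(ctx context.Context) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()
	// The same command the log above shows failing with a TLS handshake timeout.
	cmd := exec.CommandContext(ctx,
		"sudo", "/var/lib/minikube/binaries/v1.24.2/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return string(out), fmt.Errorf("describe nodes: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := describeNodes(context.Background())
	if err != nil {
		fmt.Println("! unable to fetch logs for: describe nodes:", err)
		return
	}
	fmt.Print(out)
}
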
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220725130540-44543 -n pause-20220725130540-44543
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220725130540-44543 -n pause-20220725130540-44543: exit status 2 (16.129312019s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220725130540-44543" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (61.60s)
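
For context on the probe above: minikube status takes a Go template via --format and signals a stopped component through its exit code, which is why helpers_test treats exit status 2 as "may be ok" and still reads the "Stopped" value from stdout. A minimal sketch of that pattern (a hypothetical helper, not the suite's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerState runs the same status command as the helper above.
// exec.Cmd.Output still returns the captured stdout when the command exits
// non-zero, so "Stopped" is readable even though it arrives with exit status 2.
func apiServerState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := apiServerState("pause-20220725130540-44543")
	fmt.Printf("apiserver=%q err=%v\n", state, err)
}
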

TestStartStop/group/old-k8s-version/serial/FirstStart (250.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0725 13:16:27.482632   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m9.423152584s)
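
Aside from the kubeadm failure itself, the stderr below records how the profile's docker network was chosen: network_create.go reserves a candidate private /24, runs docker network create, and on "subnet is taken" steps the third octet (192.168.49.0, then 192.168.58.0, then 192.168.67.0) until a range is free. A minimal sketch of that retry loop, inferred only from those log lines (the step size of 9 and the error-string matching are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork walks candidate 192.168.x.0/24 subnets the way the log lines
// below do. Docker's exact overlap error wording is not guaranteed here.
func createNetwork(name string) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		msg := string(out)
		if strings.Contains(msg, "overlap") || strings.Contains(msg, "taken") {
			continue // subnet already in use; try the next candidate
		}
		return "", fmt.Errorf("docker network create: %v: %s", err, msg)
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	subnet, err := createNetwork("old-k8s-version-20220725131610-44543")
	fmt.Println(subnet, err)
}
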

-- stdout --
	* [old-k8s-version-20220725131610-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220725131610-44543 in cluster old-k8s-version-20220725131610-44543
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0725 13:16:10.818819   59354 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:16:10.819029   59354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:16:10.819035   59354 out.go:309] Setting ErrFile to fd 2...
	I0725 13:16:10.819039   59354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:16:10.819148   59354 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:16:10.819669   59354 out.go:303] Setting JSON to false
	I0725 13:16:10.835202   59354 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15342,"bootTime":1658764828,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:16:10.835328   59354 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:16:10.877549   59354 out.go:177] * [old-k8s-version-20220725131610-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:16:10.898794   59354 notify.go:193] Checking for updates...
	I0725 13:16:10.935556   59354 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:16:11.009530   59354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:16:11.067497   59354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:16:11.141547   59354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:16:11.199621   59354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:16:11.237752   59354 config.go:178] Loaded profile config "kubenet-20220725125922-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:16:11.237867   59354 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:16:11.312147   59354 docker.go:137] docker version: linux-20.10.17
	I0725 13:16:11.312315   59354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:16:11.457763   59354 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:16:11.387275843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:16:11.518495   59354 out.go:177] * Using the docker driver based on user configuration
	I0725 13:16:11.539823   59354 start.go:284] selected driver: docker
	I0725 13:16:11.539860   59354 start.go:808] validating driver "docker" against <nil>
	I0725 13:16:11.539896   59354 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:16:11.543434   59354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:16:11.677930   59354 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:16:11.611918441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:16:11.678058   59354 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 13:16:11.678207   59354 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:16:11.699440   59354 out.go:177] * Using Docker Desktop driver with root privileges
	I0725 13:16:11.720979   59354 cni.go:95] Creating CNI manager for ""
	I0725 13:16:11.720999   59354 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:16:11.721014   59354 start_flags.go:310] config:
	{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:16:11.742338   59354 out.go:177] * Starting control plane node old-k8s-version-20220725131610-44543 in cluster old-k8s-version-20220725131610-44543
	I0725 13:16:11.784279   59354 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:16:11.806037   59354 out.go:177] * Pulling base image ...
	I0725 13:16:11.848044   59354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:16:11.848095   59354 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:16:11.848121   59354 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 13:16:11.848147   59354 cache.go:57] Caching tarball of preloaded images
	I0725 13:16:11.848326   59354 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:16:11.848347   59354 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 13:16:11.849187   59354 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:16:11.849274   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json: {Name:mk5dd4ca22835796a5be7bf683c0fde449b0d077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:11.912185   59354 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:16:11.912207   59354 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:16:11.912218   59354 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:16:11.912264   59354 start.go:370] acquiring machines lock for old-k8s-version-20220725131610-44543: {Name:mka786150aa94c7510878ab5519b8cf30abe9378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:16:11.912411   59354 start.go:374] acquired machines lock for "old-k8s-version-20220725131610-44543" in 136.14µs
	I0725 13:16:11.912438   59354 start.go:92] Provisioning new machine with config: &{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:16:11.912542   59354 start.go:132] createHost starting for "" (driver="docker")
	I0725 13:16:11.956229   59354 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 13:16:11.956503   59354 start.go:166] libmachine.API.Create for "old-k8s-version-20220725131610-44543" (driver="docker")
	I0725 13:16:11.956539   59354 client.go:168] LocalClient.Create starting
	I0725 13:16:11.956654   59354 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem
	I0725 13:16:11.956701   59354 main.go:134] libmachine: Decoding PEM data...
	I0725 13:16:11.956721   59354 main.go:134] libmachine: Parsing certificate...
	I0725 13:16:11.956811   59354 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem
	I0725 13:16:11.956853   59354 main.go:134] libmachine: Decoding PEM data...
	I0725 13:16:11.956866   59354 main.go:134] libmachine: Parsing certificate...
	I0725 13:16:11.957394   59354 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220725131610-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 13:16:12.020705   59354 cli_runner.go:211] docker network inspect old-k8s-version-20220725131610-44543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 13:16:12.020804   59354 network_create.go:272] running [docker network inspect old-k8s-version-20220725131610-44543] to gather additional debugging logs...
	I0725 13:16:12.020828   59354 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220725131610-44543
	W0725 13:16:12.083461   59354 cli_runner.go:211] docker network inspect old-k8s-version-20220725131610-44543 returned with exit code 1
	I0725 13:16:12.083492   59354 network_create.go:275] error running [docker network inspect old-k8s-version-20220725131610-44543]: docker network inspect old-k8s-version-20220725131610-44543: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220725131610-44543
	I0725 13:16:12.083507   59354 network_create.go:277] output of [docker network inspect old-k8s-version-20220725131610-44543]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220725131610-44543
	
	** /stderr **
	I0725 13:16:12.083744   59354 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 13:16:12.147083   59354 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e938] misses:0}
	I0725 13:16:12.147129   59354 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:16:12.147144   59354 network_create.go:115] attempt to create docker network old-k8s-version-20220725131610-44543 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 13:16:12.147210   59354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 old-k8s-version-20220725131610-44543
	W0725 13:16:12.210310   59354 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 old-k8s-version-20220725131610-44543 returned with exit code 1
	W0725 13:16:12.210356   59354 network_create.go:107] failed to create docker network old-k8s-version-20220725131610-44543 192.168.49.0/24, will retry: subnet is taken
	I0725 13:16:12.210606   59354 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e938] amended:false}} dirty:map[] misses:0}
	I0725 13:16:12.210623   59354 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:16:12.210827   59354 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e938] amended:true}} dirty:map[192.168.49.0:0xc00000e938 192.168.58.0:0xc00000e970] misses:0}
	I0725 13:16:12.210840   59354 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:16:12.210855   59354 network_create.go:115] attempt to create docker network old-k8s-version-20220725131610-44543 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0725 13:16:12.210915   59354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 old-k8s-version-20220725131610-44543
	W0725 13:16:12.273068   59354 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 old-k8s-version-20220725131610-44543 returned with exit code 1
	W0725 13:16:12.273108   59354 network_create.go:107] failed to create docker network old-k8s-version-20220725131610-44543 192.168.58.0/24, will retry: subnet is taken
	I0725 13:16:12.273378   59354 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e938] amended:true}} dirty:map[192.168.49.0:0xc00000e938 192.168.58.0:0xc00000e970] misses:1}
	I0725 13:16:12.273396   59354 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:16:12.273604   59354 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e938] amended:true}} dirty:map[192.168.49.0:0xc00000e938 192.168.58.0:0xc00000e970 192.168.67.0:0xc000a16240] misses:1}
	I0725 13:16:12.273617   59354 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0725 13:16:12.273624   59354 network_create.go:115] attempt to create docker network old-k8s-version-20220725131610-44543 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0725 13:16:12.273681   59354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 old-k8s-version-20220725131610-44543
	I0725 13:16:12.369119   59354 network_create.go:99] docker network old-k8s-version-20220725131610-44543 192.168.67.0/24 created
	I0725 13:16:12.369150   59354 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20220725131610-44543" container
	I0725 13:16:12.369232   59354 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 13:16:12.435953   59354 cli_runner.go:164] Run: docker volume create old-k8s-version-20220725131610-44543 --label name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 --label created_by.minikube.sigs.k8s.io=true
	I0725 13:16:12.499678   59354 oci.go:103] Successfully created a docker volume old-k8s-version-20220725131610-44543
	I0725 13:16:12.499772   59354 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220725131610-44543-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 --entrypoint /usr/bin/test -v old-k8s-version-20220725131610-44543:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0725 13:16:12.972625   59354 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220725131610-44543
	I0725 13:16:12.972680   59354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:16:12.972695   59354 kic.go:179] Starting extracting preloaded images to volume ...
	I0725 13:16:12.972789   59354 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220725131610-44543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 13:16:17.034793   59354 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220725131610-44543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.061843182s)
	I0725 13:16:17.034814   59354 kic.go:188] duration metric: took 4.062039 seconds to extract preloaded images to volume
	I0725 13:16:17.034916   59354 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 13:16:17.168186   59354 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220725131610-44543 --name old-k8s-version-20220725131610-44543 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220725131610-44543 --network old-k8s-version-20220725131610-44543 --ip 192.168.67.2 --volume old-k8s-version-20220725131610-44543:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0725 13:16:17.545162   59354 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Running}}
	I0725 13:16:17.619842   59354 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:16:17.707610   59354 cli_runner.go:164] Run: docker exec old-k8s-version-20220725131610-44543 stat /var/lib/dpkg/alternatives/iptables
	I0725 13:16:17.846679   59354 oci.go:144] the created container "old-k8s-version-20220725131610-44543" has a running status.
	I0725 13:16:17.846714   59354 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa...
	I0725 13:16:17.918431   59354 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 13:16:18.033474   59354 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:16:18.103425   59354 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 13:16:18.103443   59354 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220725131610-44543 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 13:16:18.229368   59354 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:16:18.299602   59354 machine.go:88] provisioning docker machine ...
	I0725 13:16:18.299730   59354 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725131610-44543"
	I0725 13:16:18.299843   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:18.369714   59354 main.go:134] libmachine: Using SSH client type: native
	I0725 13:16:18.369934   59354 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58398 <nil> <nil>}
	I0725 13:16:18.369948   59354 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725131610-44543 && echo "old-k8s-version-20220725131610-44543" | sudo tee /etc/hostname
	I0725 13:16:18.496082   59354 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725131610-44543
	
	I0725 13:16:18.496163   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:18.569514   59354 main.go:134] libmachine: Using SSH client type: native
	I0725 13:16:18.569670   59354 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58398 <nil> <nil>}
	I0725 13:16:18.569688   59354 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725131610-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725131610-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725131610-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:16:18.692426   59354 main.go:134] libmachine: SSH cmd err, output: <nil>: 
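The SSH script above is an idempotent /etc/hosts update: it rewrites an existing 127.0.1.1 entry when one is present and appends one otherwise. The same pattern in standalone form, with a hypothetical hostname in place of the profile name:

	NAME=example-node   # hypothetical; minikube substitutes the profile name
	if grep -q '^127.0.1.1' /etc/hosts; then
	  sudo sed -i "s/^127.0.1.1 .*/127.0.1.1 $NAME/" /etc/hosts
	else
	  echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	fi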
	I0725 13:16:18.692446   59354 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:16:18.692486   59354 ubuntu.go:177] setting up certificates
	I0725 13:16:18.692496   59354 provision.go:83] configureAuth start
	I0725 13:16:18.692562   59354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:16:18.762777   59354 provision.go:138] copyHostCerts
	I0725 13:16:18.762865   59354 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:16:18.762875   59354 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:16:18.762972   59354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:16:18.763164   59354 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:16:18.763197   59354 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:16:18.763263   59354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:16:18.763408   59354 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:16:18.763414   59354 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:16:18.763467   59354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:16:18.763574   59354 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725131610-44543 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725131610-44543]
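provision.go generates this server certificate in Go, signing it with the profile CA and embedding the addresses in the san=[...] list above. A roughly equivalent openssl flow, shown only to illustrate the inputs (file names and the org are placeholders, not minikube's actual code path):

	# Issue a CA-signed server cert whose SANs mirror the san=[...] list above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins.example"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube')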
	I0725 13:16:18.821561   59354 provision.go:172] copyRemoteCerts
	I0725 13:16:18.821610   59354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:16:18.821656   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:18.893189   59354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58398 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:16:18.979311   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:16:18.999424   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0725 13:16:19.016818   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:16:19.034825   59354 provision.go:86] duration metric: configureAuth took 342.30729ms
	I0725 13:16:19.034842   59354 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:16:19.034976   59354 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:16:19.035029   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:19.105602   59354 main.go:134] libmachine: Using SSH client type: native
	I0725 13:16:19.105742   59354 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58398 <nil> <nil>}
	I0725 13:16:19.105756   59354 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:16:19.225612   59354 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:16:19.225632   59354 ubuntu.go:71] root file system type: overlay
	I0725 13:16:19.225813   59354 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:16:19.225886   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:19.296002   59354 main.go:134] libmachine: Using SSH client type: native
	I0725 13:16:19.296163   59354 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58398 <nil> <nil>}
	I0725 13:16:19.296211   59354 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:16:19.425869   59354 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
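The unit written above relies on the standard systemd override idiom: an empty ExecStart= first clears the command inherited from the base configuration, so the unit ends up with exactly one ExecStart. Once the node is up, the effective unit can be inspected from the host; the container name below is taken from this log:

	NODE=old-k8s-version-20220725131610-44543
	docker exec "$NODE" systemctl cat docker.service                 # show the unit systemd actually loaded
	docker exec "$NODE" systemctl show -p ExecStart docker.service   # confirm a single ExecStart survived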
	I0725 13:16:19.425943   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:19.496615   59354 main.go:134] libmachine: Using SSH client type: native
	I0725 13:16:19.496772   59354 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58398 <nil> <nil>}
	I0725 13:16:19.496786   59354 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:16:20.096254   59354 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-25 20:16:19.439112633 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
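The restart command above is gated on diff: `diff -u old new` exits non-zero when the files differ, so the mv/daemon-reload/restart branch runs only when the rendered unit actually changed. The same idiom written out:

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi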
	I0725 13:16:20.096280   59354 machine.go:91] provisioned docker machine in 1.796546235s
	I0725 13:16:20.096287   59354 client.go:171] LocalClient.Create took 8.139580327s
	I0725 13:16:20.096302   59354 start.go:174] duration metric: libmachine.API.Create for "old-k8s-version-20220725131610-44543" took 8.139638135s
	I0725 13:16:20.096312   59354 start.go:307] post-start starting for "old-k8s-version-20220725131610-44543" (driver="docker")
	I0725 13:16:20.096317   59354 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:16:20.096384   59354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:16:20.096433   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:20.170020   59354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58398 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:16:20.256257   59354 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:16:20.259856   59354 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:16:20.259874   59354 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:16:20.259881   59354 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:16:20.259887   59354 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:16:20.259896   59354 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:16:20.260008   59354 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:16:20.260158   59354 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:16:20.260310   59354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:16:20.267295   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:16:20.285830   59354 start.go:310] post-start completed in 189.50552ms
	I0725 13:16:20.286359   59354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:16:20.356448   59354 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:16:20.356866   59354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:16:20.356912   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:20.426850   59354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58398 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:16:20.509175   59354 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:16:20.513936   59354 start.go:135] duration metric: createHost completed in 8.601214073s
	I0725 13:16:20.513951   59354 start.go:82] releasing machines lock for "old-k8s-version-20220725131610-44543", held for 8.601361457s
	I0725 13:16:20.514040   59354 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:16:20.584384   59354 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:16:20.584386   59354 ssh_runner.go:195] Run: systemctl --version
	I0725 13:16:20.584461   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:20.584491   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:20.662571   59354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58398 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:16:20.664645   59354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58398 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:16:20.876936   59354 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:16:20.886914   59354 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:16:20.886979   59354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:16:20.895823   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:16:20.908853   59354 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
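The crictl.yaml written just above points CRI tooling at dockershim, which is what a v1.16 kubelet on the docker runtime expects. Assuming crictl is present in the node image, the endpoint can be probed with:

	sudo crictl info   # reads /etc/crictl.yaml by default; errors out until the dockershim socket exists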
	I0725 13:16:20.982597   59354 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:16:21.056079   59354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:16:21.129557   59354 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:16:21.322769   59354 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:16:21.357268   59354 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:16:21.417798   59354 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 13:16:21.417911   59354 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725131610-44543 dig +short host.docker.internal
	I0725 13:16:21.548017   59354 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:16:21.548113   59354 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:16:21.552218   59354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:16:21.561546   59354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:16:21.633922   59354 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:16:21.633992   59354 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:16:21.662633   59354 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:16:21.662651   59354 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:16:21.662716   59354 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:16:21.693049   59354 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:16:21.693067   59354 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:16:21.693136   59354 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
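The CgroupDriver probe matters because the kubelet's cgroupDriver (set to systemd in the generated config below) must match what dockerd reports; a mismatch is a common reason the kubelet never becomes healthy. A quick check, using the same format string as the log:

	docker info --format '{{.CgroupDriver}}'   # expect "systemd" to match the KubeletConfiguration below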
	I0725 13:16:21.765549   59354 cni.go:95] Creating CNI manager for ""
	I0725 13:16:21.765561   59354 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:16:21.765575   59354 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:16:21.765592   59354 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725131610-44543 NodeName:old-k8s-version-20220725131610-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:16:21.765697   59354 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725131610-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725131610-44543
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
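Before handing this config to kubeadm init, it can be exercised without mutating the node: kubeadm's --dry-run renders the same manifests and surfaces most config errors. A sketch against the path used above:

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run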
	I0725 13:16:21.765770   59354 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725131610-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:16:21.765828   59354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 13:16:21.773382   59354 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:16:21.773451   59354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:16:21.783746   59354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 13:16:21.796778   59354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:16:21.809701   59354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 13:16:21.822419   59354 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:16:21.826080   59354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:16:21.835312   59354 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543 for IP: 192.168.67.2
	I0725 13:16:21.835417   59354 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:16:21.835466   59354 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:16:21.835506   59354 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.key
	I0725 13:16:21.835518   59354 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.crt with IP's: []
	I0725 13:16:21.996349   59354 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.crt ...
	I0725 13:16:21.996369   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.crt: {Name:mk347defbaabab765ff15869c58230699290eb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:21.996697   59354 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.key ...
	I0725 13:16:21.996707   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.key: {Name:mkb395df645cc095773da1b287b1c21a4552d7fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:21.996896   59354 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e
	I0725 13:16:21.996911   59354 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0725 13:16:22.097815   59354 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt.c7fa3a9e ...
	I0725 13:16:22.097827   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt.c7fa3a9e: {Name:mkf21576f3efdeaf6b64beff9be386ebf9fbee4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:22.098097   59354 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e ...
	I0725 13:16:22.098104   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e: {Name:mk713893cc97f79ee1bc5563cf471153e23bc938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:22.098303   59354 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt
	I0725 13:16:22.098452   59354 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key
	I0725 13:16:22.098597   59354 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key
	I0725 13:16:22.098610   59354 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt with IP's: []
	I0725 13:16:22.272427   59354 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt ...
	I0725 13:16:22.272442   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt: {Name:mk688b3af8b2ab195d9d01194633b7d169d08d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:22.272716   59354 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key ...
	I0725 13:16:22.272724   59354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key: {Name:mk2429eeb2f3d99144be9c53cc2a73dbca848e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:16:22.273097   59354 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:16:22.273138   59354 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:16:22.273148   59354 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:16:22.273179   59354 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:16:22.273209   59354 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:16:22.273237   59354 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:16:22.273299   59354 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:16:22.273751   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:16:22.291090   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 13:16:22.307742   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:16:22.324438   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:16:22.341430   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:16:22.357886   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:16:22.374853   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:16:22.391582   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:16:22.408992   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:16:22.425960   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:16:22.443026   59354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:16:22.459576   59354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:16:22.472806   59354 ssh_runner.go:195] Run: openssl version
	I0725 13:16:22.478127   59354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:16:22.488024   59354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:16:22.491976   59354 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:16:22.492030   59354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:16:22.497517   59354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:16:22.506305   59354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:16:22.515504   59354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:16:22.519342   59354 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:16:22.519390   59354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:16:22.524978   59354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:16:22.532651   59354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:16:22.540339   59354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:16:22.544256   59354 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:16:22.544315   59354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:16:22.549466   59354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
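The `<hash>.0` symlinks created in this loop follow OpenSSL's hashed-directory convention: `openssl x509 -hash` prints the 8-hex-digit subject hash that TLS libraries use to look a CA up in /etc/ssl/certs. The same step by hand:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"   # b5213941.0 in this run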
	I0725 13:16:22.557063   59354 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:16:22.557154   59354 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:16:22.588471   59354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:16:22.595996   59354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:16:22.603095   59354 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:16:22.603145   59354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:16:22.610377   59354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:16:22.610395   59354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:16:23.370683   59354 out.go:204]   - Generating certificates and keys ...
	I0725 13:16:25.465734   59354 out.go:204]   - Booting up control plane ...
	W0725 13:18:20.393115   59354 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220725131610-44543 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220725131610-44543 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 13:18:20.393149   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:18:20.813591   59354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:18:20.823227   59354 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:18:20.843772   59354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:18:20.852711   59354 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:18:20.852740   59354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:18:21.594523   59354 out.go:204]   - Generating certificates and keys ...
	I0725 13:18:22.621226   59354 out.go:204]   - Booting up control plane ...
	I0725 13:20:17.560862   59354 kubeadm.go:397] StartCluster complete in 3m54.97595343s
	I0725 13:20:17.560936   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:20:17.589631   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.589643   59354 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:20:17.589702   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:20:17.618139   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.618154   59354 logs.go:276] No container was found matching "etcd"
	I0725 13:20:17.618211   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:20:17.647445   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.647457   59354 logs.go:276] No container was found matching "coredns"
	I0725 13:20:17.647510   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:20:17.675695   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.675709   59354 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:20:17.675767   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:20:17.704369   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.704382   59354 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:20:17.704436   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:20:17.736593   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.736605   59354 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:20:17.736666   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:20:17.767457   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.767470   59354 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:20:17.767530   59354 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:20:17.796619   59354 logs.go:274] 0 containers: []
	W0725 13:20:17.796632   59354 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:20:17.796639   59354 logs.go:123] Gathering logs for kubelet ...
	I0725 13:20:17.796646   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:20:17.836946   59354 logs.go:123] Gathering logs for dmesg ...
	I0725 13:20:17.836960   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:20:17.848210   59354 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:20:17.848224   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:20:17.905820   59354 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:20:17.905831   59354 logs.go:123] Gathering logs for Docker ...
	I0725 13:20:17.905838   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:20:17.921877   59354 logs.go:123] Gathering logs for container status ...
	I0725 13:20:17.921890   59354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:20:19.975556   59354 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053584622s)
	W0725 13:20:19.975689   59354 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 13:20:19.975704   59354 out.go:239] * 
	W0725 13:20:19.975818   59354 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:20:19.975832   59354 out.go:239] * 
	W0725 13:20:19.976335   59354 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 13:20:20.040175   59354 out.go:177] 
	W0725 13:20:20.082556   59354 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:20:20.082723   59354 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 13:20:20.082877   59354 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 13:20:20.146355   59354 out.go:177] 

                                                
                                                
** /stderr **
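
Note: the kubeadm advice repeated throughout the capture above reduces to a handful of in-node checks. A minimal sketch using minikube's ssh wrapper with the profile from this run (the commands are the ones the kubeadm output itself names; their output here is not known from this log):

	# Check whether the kubelet ever came up, and why it may have died:
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220725131610-44543 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220725131610-44543 "sudo journalctl -xeu kubelet | tail -n 50"
	# List control-plane containers that may have crashed after starting:
	out/minikube-darwin-amd64 ssh -p old-k8s-version-20220725131610-44543 "docker ps -a | grep kube | grep -v pause"
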
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
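
Note: the exit path is K8S_KUBELET_NOT_RUNNING, and the log's own suggestion is to pass an explicit kubelet cgroup driver. A sketch of that retry, reusing the profile and flags from the failing invocation (whether the flag actually fixes this kicbase/Docker 20.10.17 combination is an assumption, not something this run verifies):

	# Recreate the profile with the cgroup driver the log suggests:
	out/minikube-darwin-amd64 delete -p old-k8s-version-20220725131610-44543
	out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 \
	  --memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
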
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:16:17.551184518Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12379afd279980558be4a929c4d984061b0c32714de70eee2ad7f4ba5fd48183",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58397"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/12379afd2799",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "4f78a7ab90e916cef937892224931fa1acc2f254ce858898e285125ccdeb75fc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 6 (448.797533ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 13:20:20.818763   60023 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725131610-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.04s)
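Triage note: the container itself is up, but the profile's entry never landed in the kubeconfig, so `minikube status` cannot extract an apiserver endpoint and exits 6. A minimal manual check, assuming the same profile name and the kubeconfig path printed in the stderr above:

	# List contexts in the kubeconfig this run uses; the profile should be absent
	KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig kubectl config get-contexts
	# Rewrite the kubeconfig entry for this profile, as the WARNING in stdout suggests
	out/minikube-darwin-amd64 update-context -p old-k8s-version-20220725131610-44543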

TestNetworkPlugins/group/kubenet/HairPin (54.3s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 13:16:42.040091   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:44.427521   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.098928146s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0725 13:16:47.161046   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 13:16:50.919802   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.111483267s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0725 13:16:57.402567   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.103433668s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.09776995s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110743249s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0725 13:17:17.883487   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.10922584s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.104401498s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (54.30s)
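Triage note: this test is a hairpin probe; the netcat pod dials its own Service name and the connection must loop back through the node to the same pod. The commands below sketch the same check run by hand; they assume `netcat` is the name of the Service the probe resolves, which the nc invocation in the log implies but does not show directly:

	# Confirm the Service the pod is dialing has a ClusterIP
	kubectl --context kubenet-20220725125922-44543 get svc netcat
	# Repeat the probe the test loops on
	kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080

A persistent exit status 1 here is the classic symptom of hairpin NAT not taking effect on the kubenet bridge, leaving a pod unable to reach itself through its own Service IP.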

TestStartStop/group/old-k8s-version/serial/DeployApp (1.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220725131610-44543 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220725131610-44543 create -f testdata/busybox.yaml: exit status 1 (29.876987ms)

** stderr ** 
	error: context "old-k8s-version-20220725131610-44543" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220725131610-44543 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:16:17.551184518Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b32028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/docker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12379afd279980558be4a929c4d984061b0c32714de70eee2ad7f4ba5fd48183",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58397"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/12379afd2799",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "4f78a7ab90e916cef937892224931fa1acc2f254ce858898e285125ccdeb75fc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 6 (469.032361ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 13:20:21.390931   60038 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725131610-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:16:17.551184518Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b32028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/docker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12379afd279980558be4a929c4d984061b0c32714de70eee2ad7f4ba5fd48183",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58397"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/12379afd2799",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "4f78a7ab90e916cef937892224931fa1acc2f254ce858898e285125ccdeb75fc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 6 (449.903542ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 13:20:21.909812   60050 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725131610-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.10s)
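Triage note: the create never reached the apiserver; kubectl refused to run because the context is missing from the kubeconfig, the same condition FirstStart hit above. A sketch of the retry one would attempt after repairing the context with `minikube update-context`; the manifest path is the test's own testdata/busybox.yaml:

	# Once the context exists again, the create should at least be admitted
	kubectl --context old-k8s-version-20220725131610-44543 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-20220725131610-44543 get pods --watch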

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220725131610-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0725 13:20:27.376687   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:27.381855   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:27.393368   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:27.415506   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:27.456347   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:27.537330   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:27.698555   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:28.020327   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:28.660488   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:28.994516   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.366263   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.371385   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.381571   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.401828   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.442004   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.522177   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.682459   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:29.942601   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:30.003219   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:30.643599   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:31.925900   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:32.349831   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:20:32.504077   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:34.488225   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:37.625741   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:39.610716   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:42.553319   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:47.866304   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:49.851469   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:56.495990   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:20:56.710701   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:08.348539   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:10.333318   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:30.903918   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:30.909025   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:30.920835   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:30.941159   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:30.982279   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:31.062408   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:31.222692   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:31.542969   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:32.184319   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:33.465631   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:36.026224   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:36.948797   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:38.186193   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 13:21:41.147115   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:44.457783   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 13:21:49.310059   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220725131610-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.18396152s)

-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220725131610-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220725131610-44543 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220725131610-44543 describe deploy/metrics-server -n kube-system: exit status 1 (29.525931ms)

** stderr ** 
	error: context "old-k8s-version-20220725131610-44543" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220725131610-44543 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:16:17.551184518Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12379afd279980558be4a929c4d984061b0c32714de70eee2ad7f4ba5fd48183",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58400"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58401"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58397"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/12379afd2799",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "4f78a7ab90e916cef937892224931fa1acc2f254ce858898e285125ccdeb75fc",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
E0725 13:21:51.295074   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:21:51.389311   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 6 (444.859134ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0725 13:21:51.649573   60155 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220725131610-44543" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.73s)
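Note on the failure above: every kubectl error reduces to the apiserver at 127.0.0.1:8443 refusing connections while the addon callbacks ran, after which the profile's context was also missing from the kubeconfig. As an illustrative aside only (this probe is hypothetical and not part of the minikube test suite; the address and the 5-second timeout are assumptions taken from the errors above), a minimal Go sketch that separates "apiserver down" (immediate connection refused, as seen here) from "apiserver slow" (dial timeout) might look like:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Endpoint taken from the "dial tcp 127.0.0.1:8443: connect: connection
	// refused" errors above; inside the kic container this is the apiserver.
	addr := "127.0.0.1:8443"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A refused connection fails immediately; a hung apiserver would
		// instead run into the 5s dial timeout.
		fmt.Fprintf(os.Stderr, "apiserver not reachable at %s: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("apiserver accepting connections at %s\n", addr)
}

Run inside the node (for example via docker exec into the kic container), this would fail fast with "connection refused", matching the kubectl apply errors recorded above.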

TestStartStop/group/old-k8s-version/serial/SecondStart (491.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0725 13:21:54.273930   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:22:04.476140   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:22:04.636326   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:22:11.870317   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:22:52.833789   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:23:11.233522   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:23:12.657187   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:23:13.217590   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:23:40.342525   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:23:56.034868   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:24:10.428212   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:24:14.756642   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:24:20.632381   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m6.774041306s)

-- stdout --
	* [old-k8s-version-20220725131610-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220725131610-44543 in cluster old-k8s-version-20220725131610-44543
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220725131610-44543" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0725 13:21:53.673919   60183 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:21:53.674091   60183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:21:53.674097   60183 out.go:309] Setting ErrFile to fd 2...
	I0725 13:21:53.674101   60183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:21:53.674202   60183 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:21:53.674680   60183 out.go:303] Setting JSON to false
	I0725 13:21:53.690728   60183 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15685,"bootTime":1658764828,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:21:53.690811   60183 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:21:53.712538   60183 out.go:177] * [old-k8s-version-20220725131610-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:21:53.734468   60183 notify.go:193] Checking for updates...
	I0725 13:21:53.755405   60183 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:21:53.777462   60183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:21:53.798424   60183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:21:53.819416   60183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:21:53.840488   60183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:21:53.862141   60183 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:21:53.884290   60183 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0725 13:21:53.905392   60183 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:21:53.973956   60183 docker.go:137] docker version: linux-20.10.17
	I0725 13:21:53.974120   60183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:21:54.106665   60183 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:21:54.051064083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:21:54.128839   60183 out.go:177] * Using the docker driver based on existing profile
	I0725 13:21:54.150256   60183 start.go:284] selected driver: docker
	I0725 13:21:54.150312   60183 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:54.150444   60183 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:21:54.153661   60183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:21:54.288038   60183 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:21:54.230541816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:21:54.288195   60183 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:21:54.288211   60183 cni.go:95] Creating CNI manager for ""
	I0725 13:21:54.288221   60183 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:21:54.288229   60183 start_flags.go:310] config:
	{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:54.310268   60183 out.go:177] * Starting control plane node old-k8s-version-20220725131610-44543 in cluster old-k8s-version-20220725131610-44543
	I0725 13:21:54.332068   60183 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:21:54.353929   60183 out.go:177] * Pulling base image ...
	I0725 13:21:54.396171   60183 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:21:54.396230   60183 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:21:54.396268   60183 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 13:21:54.396303   60183 cache.go:57] Caching tarball of preloaded images
	I0725 13:21:54.396533   60183 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:21:54.396569   60183 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 13:21:54.397710   60183 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:21:54.461117   60183 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:21:54.461134   60183 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:21:54.461150   60183 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:21:54.461221   60183 start.go:370] acquiring machines lock for old-k8s-version-20220725131610-44543: {Name:mka786150aa94c7510878ab5519b8cf30abe9378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:21:54.461319   60183 start.go:374] acquired machines lock for "old-k8s-version-20220725131610-44543" in 74.735µs
	I0725 13:21:54.461339   60183 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:21:54.461349   60183 fix.go:55] fixHost starting: 
	I0725 13:21:54.461599   60183 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:21:54.529917   60183 fix.go:103] recreateIfNeeded on old-k8s-version-20220725131610-44543: state=Stopped err=<nil>
	W0725 13:21:54.529947   60183 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:21:54.573533   60183 out.go:177] * Restarting existing docker container for "old-k8s-version-20220725131610-44543" ...
	I0725 13:21:54.594675   60183 cli_runner.go:164] Run: docker start old-k8s-version-20220725131610-44543
	I0725 13:21:54.964125   60183 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:21:55.037820   60183 kic.go:415] container "old-k8s-version-20220725131610-44543" state is running.
	I0725 13:21:55.038433   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:55.113560   60183 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:21:55.114030   60183 machine.go:88] provisioning docker machine ...
	I0725 13:21:55.114068   60183 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725131610-44543"
	I0725 13:21:55.114171   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.190035   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.190239   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.190254   60183 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725131610-44543 && echo "old-k8s-version-20220725131610-44543" | sudo tee /etc/hostname
	I0725 13:21:55.319366   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725131610-44543
	
	I0725 13:21:55.319439   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.392552   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.392712   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.392732   60183 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725131610-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725131610-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725131610-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:21:55.513463   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:21:55.513485   60183 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:21:55.513514   60183 ubuntu.go:177] setting up certificates
	I0725 13:21:55.513524   60183 provision.go:83] configureAuth start
	I0725 13:21:55.513588   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:55.584163   60183 provision.go:138] copyHostCerts
	I0725 13:21:55.584244   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:21:55.584253   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:21:55.584354   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:21:55.584593   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:21:55.584602   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:21:55.584658   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:21:55.584799   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:21:55.584805   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:21:55.584862   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:21:55.584974   60183 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725131610-44543 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725131610-44543]
	I0725 13:21:55.687712   60183 provision.go:172] copyRemoteCerts
	I0725 13:21:55.687798   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:21:55.687857   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.758975   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:55.843244   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:21:55.859895   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0725 13:21:55.876505   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:21:55.893602   60183 provision.go:86] duration metric: configureAuth took 380.052293ms
	I0725 13:21:55.893616   60183 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:21:55.893756   60183 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:21:55.893807   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.964720   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.964908   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.964920   60183 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:21:56.084753   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:21:56.084769   60183 ubuntu.go:71] root file system type: overlay
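The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns the root filesystem type before templating the Docker unit. A tiny Go sketch of the same probe; note `df --output` is GNU coreutils, so this works inside the Ubuntu-based kic container but not on a stock macOS host, and the helper name is made up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the same probe the provisioner issues over SSH: ask df
// for the filesystem type of / and keep only the last line of output.
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fstype, err := rootFSType()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("root file system type:", fstype) // "overlay" inside the kic container
}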
	I0725 13:21:56.084915   60183 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:21:56.084981   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.155842   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:56.155981   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:56.156032   60183 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:21:56.286190   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
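Two details in the unit just written are worth calling out: the bare `ExecStart=` clears the command inherited from the base unit (otherwise systemd rejects the service, as the embedded comment explains), and the whole file is rendered host-specifically before being piped to `sudo tee`. A toy Go rendering of that idea with text/template; the struct and field names here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// dockerUnit carries the per-host values substituted into the unit file.
type dockerUnit struct {
	CACert, ServerCert, ServerKey string
	InsecureRegistry              string
}

const unitTmpl = `[Service]
# An empty ExecStart= clears the command inherited from the base unit;
# without it systemd rejects the service with "more than one ExecStart=".
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --insecure-registry {{.InsecureRegistry}}
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	u := dockerUnit{
		CACert:           "/etc/docker/ca.pem",
		ServerCert:       "/etc/docker/server.pem",
		ServerKey:        "/etc/docker/server-key.pem",
		InsecureRegistry: "10.96.0.0/12",
	}
	_ = t.Execute(os.Stdout, u) // the real flow pipes this to `sudo tee` over SSH
}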
	I0725 13:21:56.286275   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.357571   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:56.357744   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:56.357760   60183 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:21:56.482497   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:21:56.482513   60183 machine.go:91] provisioned docker machine in 1.368435196s
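The `diff -u ... || { mv ...; systemctl ... restart docker; }` one-liner above restarts Docker only when the freshly rendered unit differs from the installed one, so an unchanged config never bounces the daemon. A rough Go equivalent of that compare-then-swap step (the real paths need root, and the function name is mine):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors the shell one-liner: leave the running daemon
// untouched when the rendered unit equals the installed one, otherwise
// swap the file in and reload/restart.
func installIfChanged(installed, rendered string) error {
	old, _ := os.ReadFile(installed) // a missing unit just reads as empty
	next, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(old, next) {
		return nil // no change, no restart
	}
	if err := os.Rename(rendered, installed); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Println(err)
	}
}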
	I0725 13:21:56.482522   60183 start.go:307] post-start starting for "old-k8s-version-20220725131610-44543" (driver="docker")
	I0725 13:21:56.482527   60183 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:21:56.482601   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:21:56.482652   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.554006   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.642412   60183 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:21:56.645967   60183 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:21:56.645982   60183 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:21:56.645989   60183 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:21:56.645993   60183 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:21:56.646005   60183 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:21:56.646118   60183 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:21:56.646284   60183 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:21:56.646439   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:21:56.653543   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:21:56.673151   60183 start.go:310] post-start completed in 190.601782ms
	I0725 13:21:56.673236   60183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:21:56.673292   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.745535   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.830784   60183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:21:56.836577   60183 fix.go:57] fixHost completed within 2.375156628s
	I0725 13:21:56.836597   60183 start.go:82] releasing machines lock for "old-k8s-version-20220725131610-44543", held for 2.375196554s
	I0725 13:21:56.836691   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:56.908406   60183 ssh_runner.go:195] Run: systemctl --version
	I0725 13:21:56.908410   60183 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:21:56.908468   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.908476   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.984091   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.985901   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:57.198212   60183 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:21:57.207890   60183 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:21:57.207956   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:21:57.219448   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:21:57.232370   60183 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:21:57.302875   60183 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:21:57.376726   60183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:21:57.442738   60183 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:21:57.646325   60183 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:21:57.685082   60183 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:21:57.778355   60183 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 13:21:57.778528   60183 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725131610-44543 dig +short host.docker.internal
	I0725 13:21:57.907625   60183 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:21:57.907747   60183 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:21:57.911756   60183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
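The /etc/hosts edit above is idempotent: strip any existing line ending in the tab-separated name, append the fresh mapping, write to a temp file, and copy it into place. A Go sketch of the same rewrite (the real flow copies with sudo; setHostsEntry is an illustrative name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line for the name and appends a fresh
// "ip<TAB>name" mapping, writing through a temp file as the log does.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry for this name, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the real flow uses `sudo cp` instead
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}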
	I0725 13:21:57.921003   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:57.991786   60183 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:21:57.991860   60183 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:21:58.022698   60183 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:21:58.022711   60183 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:21:58.022798   60183 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:21:58.052074   60183 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:21:58.052091   60183 cache_images.go:84] Images are preloaded, skipping loading
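The preload check above is essentially a set comparison: list repo:tag pairs from the daemon and skip extracting the preload tarball when everything expected is already present. A sketch of that check, with a deliberately shortened want list:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Compare the daemon's repo:tag list against an expected set and report
// whether the preload tarball would still need extracting.
func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	want := []string{"k8s.gcr.io/kube-apiserver:v1.16.0", "k8s.gcr.io/etcd:3.3.15-0"}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing, would extract preload for:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}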
	I0725 13:21:58.052214   60183 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:21:58.125974   60183 cni.go:95] Creating CNI manager for ""
	I0725 13:21:58.125987   60183 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:21:58.126001   60183 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:21:58.126035   60183 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725131610-44543 NodeName:old-k8s-version-20220725131610-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:21:58.126181   60183 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725131610-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725131610-44543
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:21:58.126269   60183 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725131610-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:21:58.126356   60183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 13:21:58.134118   60183 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:21:58.134189   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:21:58.141324   60183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 13:21:58.154757   60183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:21:58.167498   60183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 13:21:58.179668   60183 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:21:58.183227   60183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:21:58.192523   60183 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543 for IP: 192.168.67.2
	I0725 13:21:58.192631   60183 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:21:58.192684   60183 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:21:58.192765   60183 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.key
	I0725 13:21:58.192828   60183 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e
	I0725 13:21:58.192872   60183 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key
	I0725 13:21:58.193074   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:21:58.193119   60183 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:21:58.193132   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:21:58.193167   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:21:58.193202   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:21:58.193229   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:21:58.193300   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:21:58.193838   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:21:58.210321   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 13:21:58.228970   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:21:58.245718   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:21:58.262421   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:21:58.279214   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:21:58.297844   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:21:58.314779   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:21:58.331511   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:21:58.348755   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:21:58.365526   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:21:58.382721   60183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:21:58.395675   60183 ssh_runner.go:195] Run: openssl version
	I0725 13:21:58.401635   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:21:58.409787   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.413787   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.413829   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.419159   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:21:58.426230   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:21:58.434193   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.438053   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.438096   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.443183   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:21:58.450469   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:21:58.457925   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.461769   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.461816   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.467074   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
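The openssl/ln sequence above wires each CA into OpenSSL's hashed-lookup directory: `openssl x509 -hash -noout` prints the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is symlinked at the PEM so verification can find it. A Go sketch of that wiring (writing into /etc/ssl/certs needs root; trustCert is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert asks openssl for the subject hash and points the hashed
// lookup name at the PEM, replacing any stale link (like `ln -fs`).
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ignore the error: the link may simply not exist yet
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}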
	I0725 13:21:58.474326   60183 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:58.474425   60183 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:21:58.502814   60183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:21:58.510458   60183 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:21:58.510472   60183 kubeadm.go:626] restartCluster start
	I0725 13:21:58.510516   60183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:21:58.517042   60183 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
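Note the exit status 1 here is informational, not a failure: `sudo test -d /data/minikube` exiting nonzero just means the directory is absent and the compat symlinks can be skipped. A Go sketch of treating exit code 1 as a clean "no" (this assumes a standalone test(1) binary, as on the Ubuntu guest; dirExists is an illustrative name):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// dirExists shells out to test(1); exit code 1 means "directory absent",
// which is a valid answer rather than an error.
func dirExists(path string) (bool, error) {
	err := exec.Command("test", "-d", path).Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return false, nil // a clean "no", not a failure
	}
	return false, err
}

func main() {
	ok, err := dirExists("/data/minikube")
	fmt.Println(ok, err)
}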
	I0725 13:21:58.517101   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:58.590607   60183 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:21:58.590795   60183 kubeconfig.go:127] "old-k8s-version-20220725131610-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:21:58.591098   60183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:21:58.592462   60183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:21:58.600334   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:58.600385   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:58.608459   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:58.808842   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:58.808962   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:58.817999   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.008657   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.008819   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.019192   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.210602   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.210815   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.221605   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.408833   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.408950   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.417472   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.609619   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.609820   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.621045   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.809314   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.809409   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.821368   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.008728   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.008894   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.018811   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.208723   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.208885   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.219444   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.408638   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.408732   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.417392   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.610672   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.610860   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.621365   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.808746   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.808881   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.818878   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.009664   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.009771   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.020457   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.208785   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.208891   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.217523   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.409152   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.409246   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.418133   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.608696   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.608826   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.618526   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.618536   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.618580   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.626858   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.626869   60183 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
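The long run of `pgrep -xnf kube-apiserver.*minikube.*` lines above is a fixed-interval poll with an overall deadline: pgrep exits 0 as soon as a matching process exists, and here the deadline won, hence "timed out waiting for the condition". A minimal Go sketch of that wait loop (names and timeouts are illustrative):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for an apiserver process at a fixed interval
// until it appears or the surrounding context's deadline expires.
func waitForAPIServer(ctx context.Context, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 when a match exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for apiserver: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}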
	I0725 13:22:01.626876   60183 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:22:01.626930   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:22:01.657002   60183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:22:01.667081   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:22:01.674438   60183 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jul 25 20:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jul 25 20:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jul 25 20:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jul 25 20:18 /etc/kubernetes/scheduler.conf
	
	I0725 13:22:01.674489   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:22:01.681528   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:22:01.688711   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:22:01.695801   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:22:01.703394   60183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:22:01.710791   60183 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:22:01.710802   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:01.761240   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.581237   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.790070   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.852549   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
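For a restart, the log re-runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing kubeadm.yaml instead of doing a full `kubeadm init`. A sketch of driving that same sequence from Go:

package main

import (
	"fmt"
	"os/exec"
)

// Re-run the individual kubeadm init phases against the existing config,
// stopping at the first failure, as the restart path above does.
func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return
		}
	}
}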
	I0725 13:22:02.904809   60183 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:22:02.904874   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:03.415372   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:03.913636   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:04.413713   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:04.913407   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:05.413354   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:05.913417   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:06.413486   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:06.915522   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:07.414044   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:07.915400   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:08.413594   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:08.915541   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:09.413617   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:09.914519   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:10.413482   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:10.915473   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:11.413695   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:11.913720   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:12.414308   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:12.914018   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:13.413606   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:13.913946   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:14.415576   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:14.915757   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:15.413747   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:15.913751   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:16.413958   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:16.913984   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:17.413766   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:17.915900   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:18.414053   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:18.915793   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:19.413913   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:19.914049   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:20.413988   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:20.914020   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:21.414013   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:21.914244   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:22.414678   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:22.915965   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:23.416018   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:23.913889   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:24.414309   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:24.914097   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:25.416085   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:25.914557   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:26.414004   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:26.913961   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:27.415609   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:27.915017   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:28.414405   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:28.914648   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:29.416017   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:29.914651   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:30.416213   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:30.914111   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:31.414680   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:31.914434   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:32.416332   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:32.914962   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:33.415016   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:33.914290   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:34.416347   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:34.915975   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:35.414989   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:35.914255   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:36.416340   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:36.914596   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:37.414585   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:37.914476   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:38.415904   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:38.914361   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:39.415270   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:39.914715   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:40.414871   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:40.915265   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:41.414455   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:41.915093   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:42.414441   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:42.914544   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:43.414430   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:43.914464   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:44.414683   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:44.915872   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:45.415033   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:45.916689   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:46.415363   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:46.914864   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:47.415651   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:47.914762   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:48.414887   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:48.914639   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:49.415136   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:49.914786   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:50.415401   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:50.916109   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:51.415002   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:51.915039   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:52.415197   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:52.916879   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:53.414799   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:53.916903   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:54.414955   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:54.915486   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:55.415041   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:55.916580   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:56.415003   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:56.915430   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:57.414998   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:57.916705   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:58.415210   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:58.914921   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:59.415400   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:59.915038   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:00.415912   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:00.915078   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:01.414978   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:01.915311   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:02.415864   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:02.915524   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:02.945401   60183 logs.go:274] 0 containers: []
	W0725 13:23:02.945413   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:02.945478   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:02.973644   60183 logs.go:274] 0 containers: []
	W0725 13:23:02.973657   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:02.973724   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:03.002721   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.002734   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:03.002788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:03.031519   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.031535   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:03.031603   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:03.061426   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.061439   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:03.061493   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:03.089574   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.089587   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:03.089645   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:03.118793   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.118804   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:03.118869   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:03.148187   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.148199   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:03.148205   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:03.148211   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:03.189187   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:03.189204   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:03.200922   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:03.200939   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:03.253329   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:03.253345   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:03.253354   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:03.267096   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:03.267108   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:05.318288   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051108125s)
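
The cycle above is the unit that repeats for the rest of the wait: poll for a kube-apiserver process with pgrep, and on each miss sweep docker ps -a --filter=name=k8s_<component> for every control-plane container before gathering kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal Go sketch of that poll-and-diagnose loop follows; the helper name, five-second interval, and six-minute timeout are illustrative assumptions, not minikube's actual implementation.

    // A minimal sketch (assumed names and timings; not minikube's code) of the
    // poll-and-diagnose loop recorded in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the logged check:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            // On each miss, list any container for each control-plane component,
            // as in: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
            for _, name := range []string{"kube-apiserver", "etcd", "coredns",
                "kube-scheduler", "kube-proxy", "kubernetes-dashboard",
                "storage-provisioner", "kube-controller-manager"} {
                ids, _ := exec.Command("docker", "ps", "-a",
                    "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
                fmt.Printf("%s containers: %q\n", name, ids)
            }
            time.Sleep(5 * time.Second) // the log shows roughly 5s between polls
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }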
	I0725 13:23:07.820791   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:07.916670   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:07.946805   60183 logs.go:274] 0 containers: []
	W0725 13:23:07.946817   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:07.946877   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:07.976713   60183 logs.go:274] 0 containers: []
	W0725 13:23:07.976727   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:07.976787   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:08.008280   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.008294   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:08.008368   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:08.039002   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.039018   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:08.039079   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:08.068905   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.068916   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:08.068975   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:08.097527   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.097539   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:08.097606   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:08.125958   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.125970   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:08.126034   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:08.154963   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.154976   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:08.154983   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:08.154989   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:08.199198   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:08.199212   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:08.210469   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:08.210485   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:08.263518   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:08.263531   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:08.263538   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:08.277559   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:08.277572   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:10.326919   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04927568s)
	I0725 13:23:12.827696   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:12.915338   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:12.944085   60183 logs.go:274] 0 containers: []
	W0725 13:23:12.944096   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:12.944151   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:12.974168   60183 logs.go:274] 0 containers: []
	W0725 13:23:12.974180   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:12.974244   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:13.002821   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.002833   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:13.002887   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:13.031211   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.031224   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:13.031281   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:13.060657   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.060672   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:13.060728   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:13.089071   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.089083   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:13.089145   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:13.118878   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.118891   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:13.118949   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:13.147109   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.147120   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:13.147149   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:13.147161   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:13.159243   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:13.159254   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:13.212182   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:13.212193   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:13.212202   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:13.227312   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:13.227327   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:15.282546   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0551466s)
	I0725 13:23:15.282653   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:15.282659   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:17.824516   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:17.916271   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:17.946818   60183 logs.go:274] 0 containers: []
	W0725 13:23:17.946831   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:17.946889   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:17.975561   60183 logs.go:274] 0 containers: []
	W0725 13:23:17.975573   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:17.975634   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:18.004924   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.004936   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:18.004998   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:18.033904   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.033916   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:18.033972   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:18.063640   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.063653   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:18.063713   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:18.091848   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.091864   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:18.091918   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:18.120698   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.120710   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:18.120772   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:18.150302   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.150314   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:18.150321   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:18.150328   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:18.189307   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:18.189321   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:18.201238   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:18.201251   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:18.257070   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:18.257081   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:18.257091   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:18.271090   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:18.271102   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:20.325947   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054774189s)
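
Every describe-nodes attempt in these cycles fails identically: with no kube-apiserver container running, nothing listens on the endpoint the kubeconfig points at, so kubectl reports "connection refused" on localhost:8443. A sketch of the equivalent reachability probe (the host and port come from the log; the probe itself is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the endpoint the failing kubectl calls target (localhost:8443
        // per the log). A refused dial here reproduces the kubectl error.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver endpoint unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver endpoint reachable")
    }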
	I0725 13:23:22.827182   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:22.916614   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:22.947019   60183 logs.go:274] 0 containers: []
	W0725 13:23:22.947032   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:22.947094   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:22.976102   60183 logs.go:274] 0 containers: []
	W0725 13:23:22.976115   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:22.976175   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:23.005390   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.005405   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:23.005472   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:23.036043   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.036058   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:23.036113   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:23.065291   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.065303   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:23.065362   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:23.094601   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.094612   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:23.094677   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:23.124130   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.124142   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:23.124197   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:23.152885   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.152898   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:23.152906   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:23.152915   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:23.207267   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:23.207277   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:23.207303   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:23.220621   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:23.220633   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:25.277761   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057055425s)
	I0725 13:23:25.277872   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:25.277880   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:25.317120   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:25.317134   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:27.830407   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:27.916091   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:27.947876   60183 logs.go:274] 0 containers: []
	W0725 13:23:27.947889   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:27.947943   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:27.976656   60183 logs.go:274] 0 containers: []
	W0725 13:23:27.976668   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:27.976726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:28.005656   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.005669   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:28.005726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:28.035060   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.035072   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:28.035132   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:28.063371   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.063395   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:28.063456   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:28.093066   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.093078   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:28.093142   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:28.121760   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.121773   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:28.121829   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:28.150873   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.150885   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:28.150891   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:28.150901   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:28.166253   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:28.166265   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:30.219274   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052936893s)
	I0725 13:23:30.219386   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:30.219393   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:30.259179   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:30.259192   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:30.270501   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:30.270513   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:30.323106   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:32.825418   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:32.915954   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:32.946505   60183 logs.go:274] 0 containers: []
	W0725 13:23:32.946517   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:32.946580   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:32.976363   60183 logs.go:274] 0 containers: []
	W0725 13:23:32.976376   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:32.976442   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:33.004925   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.004938   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:33.004996   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:33.034716   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.034728   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:33.034788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:33.062554   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.062566   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:33.062623   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:33.091734   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.091746   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:33.091805   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:33.120846   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.120858   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:33.120924   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:33.149461   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.149474   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:33.149481   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:33.149492   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:33.188609   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:33.188621   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:33.200250   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:33.200263   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:33.252688   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:33.252701   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:33.252711   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:33.266791   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:33.266803   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:35.325253   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05837666s)
	I0725 13:23:37.827729   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:37.918114   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:37.948670   60183 logs.go:274] 0 containers: []
	W0725 13:23:37.948682   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:37.948740   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:37.978509   60183 logs.go:274] 0 containers: []
	W0725 13:23:37.978521   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:37.978606   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:38.008790   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.008805   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:38.008873   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:38.037601   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.037614   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:38.037674   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:38.066393   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.066407   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:38.066480   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:38.094341   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.094354   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:38.094413   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:38.123151   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.123163   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:38.123228   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:38.151883   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.151894   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:38.151901   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:38.151913   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:38.164057   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:38.164070   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:38.217391   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:38.217404   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:38.217411   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:38.232266   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:38.232279   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:40.284014   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051664102s)
	I0725 13:23:40.284120   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:40.284127   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:42.824116   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:42.916418   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:42.945082   60183 logs.go:274] 0 containers: []
	W0725 13:23:42.945095   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:42.945161   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:42.976210   60183 logs.go:274] 0 containers: []
	W0725 13:23:42.976221   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:42.976283   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:43.004760   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.004772   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:43.004828   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:43.034045   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.034057   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:43.034136   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:43.063735   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.063747   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:43.063807   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:43.092971   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.092984   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:43.093046   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:43.122089   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.122102   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:43.122165   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:43.151913   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.151927   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:43.151933   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:43.151940   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:43.191482   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:43.191500   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:43.204833   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:43.204851   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:43.266710   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:43.266721   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:43.266728   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:43.280481   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:43.280493   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:45.335689   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055123517s)
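
The container-status step runs a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: try crictl first, and if that command fails for any reason (including crictl not being installed), fall back to docker ps -a; the Completed lines show each run taking roughly two seconds. The same preference order in Go, as an illustrative sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the logged fallback
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    func containerStatus() ([]byte, error) {
        // Try crictl first; on any failure (including crictl not being
        // installed) fall back to docker, as the shell "||" does.
        if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
            return out, nil
        }
        return exec.Command("docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Printf("%s", out)
    }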
	I0725 13:23:47.836914   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:47.916508   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:47.947483   60183 logs.go:274] 0 containers: []
	W0725 13:23:47.947496   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:47.947555   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:47.976844   60183 logs.go:274] 0 containers: []
	W0725 13:23:47.976858   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:47.976921   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:48.006778   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.006790   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:48.006847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:48.036361   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.036374   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:48.036438   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:48.066116   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.066132   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:48.066196   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:48.095236   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.095249   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:48.095308   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:48.124615   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.124627   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:48.124684   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:48.154933   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.154945   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:48.154951   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:48.154958   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:48.196269   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:48.196282   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:48.208071   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:48.208082   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:48.261791   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:48.261801   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:48.261807   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:48.275612   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:48.275624   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:50.328143   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05244781s)
	I0725 13:23:52.830598   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:52.916555   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:52.946432   60183 logs.go:274] 0 containers: []
	W0725 13:23:52.946449   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:52.946523   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:52.976593   60183 logs.go:274] 0 containers: []
	W0725 13:23:52.976605   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:52.976673   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:53.010114   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.010126   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:53.010182   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:53.038708   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.038720   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:53.038781   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:53.067454   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.067466   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:53.067528   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:53.095959   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.095971   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:53.096030   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:53.126372   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.126385   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:53.126450   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:53.155509   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.155523   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:53.155530   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:53.155537   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:53.195731   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:53.195744   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:53.207459   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:53.207473   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:53.260748   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:53.260768   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:53.260775   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:53.274157   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:53.274169   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:55.324854   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0506133s)
	I0725 13:23:57.825573   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:57.917185   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:57.947745   60183 logs.go:274] 0 containers: []
	W0725 13:23:57.947758   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:57.947814   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:57.975616   60183 logs.go:274] 0 containers: []
	W0725 13:23:57.975628   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:57.975690   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:58.004104   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.004116   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:58.004180   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:58.032249   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.032261   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:58.032330   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:58.062006   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.062021   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:58.062074   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:58.090537   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.090548   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:58.090607   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:58.119003   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.119015   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:58.119071   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:58.149646   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.149660   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:58.149668   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:58.149677   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:00.207223   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057470129s)
	I0725 13:24:00.207346   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:00.207356   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:00.246278   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:00.246294   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:00.257799   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:00.257812   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:00.311151   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:00.311187   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:00.311201   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:02.825458   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:02.916753   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:02.946058   60183 logs.go:274] 0 containers: []
	W0725 13:24:02.946070   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:02.946127   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:02.974437   60183 logs.go:274] 0 containers: []
	W0725 13:24:02.974450   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:02.974506   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:03.004307   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.004320   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:03.004405   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:03.034237   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.034248   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:03.034308   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:03.066725   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.066737   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:03.066792   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:03.097377   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.097389   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:03.097449   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:03.126782   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.126794   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:03.126857   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:03.155129   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.155142   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:03.155149   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:03.155155   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:03.195481   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:03.195494   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:03.206820   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:03.206835   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:03.259802   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:03.259812   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:03.259818   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:03.273974   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:03.273987   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:05.328003   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053944862s)
	I0725 13:24:07.828361   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:07.917072   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:07.949966   60183 logs.go:274] 0 containers: []
	W0725 13:24:07.949983   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:07.950052   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:07.988332   60183 logs.go:274] 0 containers: []
	W0725 13:24:07.988346   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:07.988409   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:08.027678   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.027690   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:08.027756   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:08.059823   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.059836   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:08.059905   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:08.093298   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.093311   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:08.093374   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:08.131132   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.131144   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:08.131200   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:08.163873   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.163888   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:08.163950   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:08.195373   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.195386   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:08.195392   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:08.195399   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:08.239634   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:08.239650   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:08.257904   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:08.257919   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:08.319885   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:08.319898   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:08.319904   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:08.336710   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:08.336724   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:10.401425   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06462793s)
	I0725 13:24:12.901765   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:12.917037   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:12.952668   60183 logs.go:274] 0 containers: []
	W0725 13:24:12.952681   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:12.952736   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:12.982943   60183 logs.go:274] 0 containers: []
	W0725 13:24:12.982955   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:12.983017   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:13.013797   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.013810   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:13.013876   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:13.044254   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.044267   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:13.044326   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:13.074217   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.074230   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:13.074293   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:13.109560   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.109573   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:13.109636   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:13.140893   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.140906   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:13.140965   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:13.176452   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.176466   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:13.176474   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:13.176482   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:13.221236   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:13.221274   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:13.234259   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:13.234274   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:13.291367   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:13.291377   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:13.291384   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:13.306619   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:13.306632   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:15.363070   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056365342s)
	I0725 13:24:17.865239   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:17.917268   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:17.948026   60183 logs.go:274] 0 containers: []
	W0725 13:24:17.948038   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:17.948094   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:17.978209   60183 logs.go:274] 0 containers: []
	W0725 13:24:17.978222   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:17.978280   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:18.006707   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.006718   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:18.006775   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:18.037659   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.037671   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:18.037726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:18.065998   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.066016   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:18.066075   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:18.096217   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.096230   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:18.096286   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:18.126356   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.126369   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:18.126427   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:18.155056   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.155068   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:18.155074   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:18.155088   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:18.210436   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:18.210447   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:18.210455   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:18.224505   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:18.224517   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:20.280940   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056351635s)
	I0725 13:24:20.281045   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:20.281052   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:20.322100   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:20.322118   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:22.836188   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:22.918171   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:22.949256   60183 logs.go:274] 0 containers: []
	W0725 13:24:22.949269   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:22.949330   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:22.979856   60183 logs.go:274] 0 containers: []
	W0725 13:24:22.979872   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:22.979930   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:23.009212   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.009224   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:23.009280   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:23.040003   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.040014   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:23.040069   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:23.070063   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.070075   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:23.070129   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:23.098168   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.098181   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:23.098239   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:23.127379   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.127392   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:23.127449   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:23.156617   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.156630   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:23.156637   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:23.156644   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:23.208837   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:23.208847   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:23.208854   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:23.222431   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:23.222443   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:25.276610   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054096015s)
	I0725 13:24:25.276716   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:25.276723   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:25.317113   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:25.317132   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:27.831788   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:27.917665   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:27.951671   60183 logs.go:274] 0 containers: []
	W0725 13:24:27.951683   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:27.951742   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:27.981792   60183 logs.go:274] 0 containers: []
	W0725 13:24:27.981805   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:27.981861   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:28.010660   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.010675   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:28.010745   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:28.039897   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.039910   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:28.039966   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:28.069312   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.069324   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:28.069379   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:28.098531   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.098544   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:28.098599   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:28.127653   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.127666   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:28.127720   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:28.156147   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.156162   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:28.156169   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:28.156177   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:28.202017   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:28.202037   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:28.219890   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:28.219905   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:28.279250   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:28.279263   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:28.279270   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:28.294488   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:28.294502   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:30.350962   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056388033s)
	I0725 13:24:32.851327   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:32.919793   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:32.950452   60183 logs.go:274] 0 containers: []
	W0725 13:24:32.950464   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:32.950519   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:32.978393   60183 logs.go:274] 0 containers: []
	W0725 13:24:32.978405   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:32.978461   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:33.008027   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.008039   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:33.008095   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:33.038231   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.038243   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:33.038297   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:33.068037   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.068049   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:33.068108   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:33.098144   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.098156   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:33.098219   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:33.131474   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.131488   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:33.131551   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:33.163043   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.163057   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:33.163064   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:33.163071   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:33.225128   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:33.225142   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:33.225148   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:33.240300   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:33.240316   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:35.304650   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064263654s)
	I0725 13:24:35.304758   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:35.304765   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:35.359741   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:35.359783   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:37.873389   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:37.918418   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:37.955388   60183 logs.go:274] 0 containers: []
	W0725 13:24:37.955407   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:37.955466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:37.996813   60183 logs.go:274] 0 containers: []
	W0725 13:24:37.996824   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:37.996887   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:38.029638   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.029653   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:38.029717   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:38.063668   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.063681   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:38.063734   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:38.097181   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.097193   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:38.097248   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:38.128322   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.128337   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:38.128423   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:38.161589   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.161605   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:38.161667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:38.199476   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.199488   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:38.199495   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:38.199501   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:38.263856   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:38.263867   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:38.263874   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:38.278755   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:38.278771   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:40.336830   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05798511s)
	I0725 13:24:40.336946   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:40.336958   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:40.385712   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:40.385733   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:42.900882   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:42.917988   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:42.955271   60183 logs.go:274] 0 containers: []
	W0725 13:24:42.955286   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:42.955386   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:42.990842   60183 logs.go:274] 0 containers: []
	W0725 13:24:42.990861   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:42.990927   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:43.024751   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.024763   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:43.024824   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:43.061278   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.061296   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:43.061361   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:43.091254   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.091266   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:43.091323   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:43.121299   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.121311   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:43.121385   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:43.150795   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.150808   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:43.150899   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:43.184239   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.184251   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:43.184258   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:43.184265   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:43.201029   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:43.201043   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:45.254970   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05385567s)
	I0725 13:24:45.255075   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:45.255081   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:45.294400   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:45.294415   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:45.306088   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:45.306101   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:45.358898   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
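
The recurring "connection to the server localhost:8443 was refused" is kubectl's own error: the node-local kubeconfig points at localhost:8443, and with no apiserver container present nothing is listening on that port. A quick confirmation from inside the node (a sketch; against a healthy apiserver the /healthz probe would return "ok" instead):

    # Confirm the port is closed rather than, say, rejecting TLS.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "connection refused, as in the log"
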
	I0725 13:24:47.859143   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:47.918290   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:47.948745   60183 logs.go:274] 0 containers: []
	W0725 13:24:47.948757   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:47.948813   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:47.978054   60183 logs.go:274] 0 containers: []
	W0725 13:24:47.978065   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:47.978125   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:48.006969   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.006982   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:48.007039   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:48.037417   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.037433   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:48.037509   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:48.067050   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.067063   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:48.067118   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:48.095883   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.095896   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:48.095950   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:48.123973   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.123985   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:48.124042   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:48.152316   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.152332   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:48.152341   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:48.152349   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:48.194780   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:48.194796   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:48.207031   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:48.207044   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:48.260819   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:48.260831   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:48.260839   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:48.274383   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:48.274397   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:50.326332   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051862489s)
	I0725 13:24:52.827101   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:52.918437   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:52.951150   60183 logs.go:274] 0 containers: []
	W0725 13:24:52.951162   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:52.951220   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:52.985739   60183 logs.go:274] 0 containers: []
	W0725 13:24:52.985753   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:52.985815   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:53.016602   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.016612   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:53.016659   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:53.046448   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.046459   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:53.046517   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:53.078374   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.078390   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:53.078466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:53.123048   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.123061   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:53.123123   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:53.154579   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.154591   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:53.154646   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:53.195527   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.195542   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:53.195551   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:53.195559   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:53.241474   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:53.241487   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:53.253883   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:53.253895   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:53.311986   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:53.312000   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:53.312008   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:53.327743   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:53.327764   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:55.393400   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065560615s)
	I0725 13:24:57.895862   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:57.919394   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:57.951377   60183 logs.go:274] 0 containers: []
	W0725 13:24:57.951389   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:57.951444   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:57.979788   60183 logs.go:274] 0 containers: []
	W0725 13:24:57.979801   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:57.979860   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:58.008898   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.008911   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:58.008967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:58.037016   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.037029   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:58.037089   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:58.066009   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.066021   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:58.066079   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:58.093711   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.093724   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:58.093788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:58.123557   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.123570   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:58.123626   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:58.151991   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.152005   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:58.152011   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:58.152018   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:58.191731   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:58.191751   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:58.205346   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:58.205362   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:58.258841   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:58.258853   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:58.258859   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:58.272311   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:58.272323   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:00.327133   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054738791s)
	I0725 13:25:02.829132   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:02.920662   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:02.950188   60183 logs.go:274] 0 containers: []
	W0725 13:25:02.950201   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:02.950260   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:02.978580   60183 logs.go:274] 0 containers: []
	W0725 13:25:02.978592   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:02.978646   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:03.006563   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.006576   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:03.006629   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:03.033788   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.033801   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:03.033855   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:03.062179   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.062191   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:03.062245   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:03.091169   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.091189   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:03.091248   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:03.120134   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.120147   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:03.120204   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:03.148569   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.148582   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:03.148588   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:03.148595   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:05.206723   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058055845s)
	I0725 13:25:05.206827   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:05.206834   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:05.244693   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:05.244707   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:05.256822   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:05.256833   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:05.308516   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:05.308531   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:05.308543   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:07.823907   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:07.918681   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:07.951167   60183 logs.go:274] 0 containers: []
	W0725 13:25:07.951179   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:07.951234   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:07.979414   60183 logs.go:274] 0 containers: []
	W0725 13:25:07.979427   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:07.979484   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:08.009108   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.009120   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:08.009178   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:08.038053   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.038070   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:08.038126   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:08.066112   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.066124   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:08.066178   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:08.094804   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.094817   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:08.094874   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:08.123943   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.123955   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:08.124011   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:08.153447   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.153460   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:08.153467   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:08.153474   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:10.205133   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051587517s)
	I0725 13:25:10.205247   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:10.205256   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:10.244085   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:10.244097   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:10.256079   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:10.256095   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:10.307417   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:10.307428   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:10.307435   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:12.823093   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:12.920941   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:12.952408   60183 logs.go:274] 0 containers: []
	W0725 13:25:12.952420   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:12.952476   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:12.981252   60183 logs.go:274] 0 containers: []
	W0725 13:25:12.981269   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:12.981333   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:13.010436   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.010447   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:13.010511   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:13.038121   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.038141   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:13.038208   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:13.068013   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.068025   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:13.068084   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:13.098322   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.098334   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:13.098389   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:13.128619   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.128634   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:13.128701   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:13.157149   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.157166   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:13.157179   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:13.157190   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:13.197722   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:13.197738   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:13.211125   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:13.211147   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:13.263333   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:13.263343   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:13.263350   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:13.276992   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:13.277004   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:15.333288   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056210609s)
	I0725 13:25:17.835729   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:17.921071   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:17.952398   60183 logs.go:274] 0 containers: []
	W0725 13:25:17.952411   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:17.952466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:17.983512   60183 logs.go:274] 0 containers: []
	W0725 13:25:17.983524   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:17.983579   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:18.012155   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.012166   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:18.012223   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:18.041437   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.041450   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:18.041509   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:18.071064   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.071076   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:18.071133   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:18.100563   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.100576   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:18.100632   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:18.130038   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.130065   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:18.130222   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:18.160243   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.160255   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:18.160262   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:18.160270   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:20.214840   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054499324s)
	I0725 13:25:20.214949   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:20.214957   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:20.254381   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:20.254393   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:20.265948   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:20.265960   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:20.317418   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:20.317429   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:20.317435   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:22.833394   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:22.919747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:22.949763   60183 logs.go:274] 0 containers: []
	W0725 13:25:22.949775   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:22.949833   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:22.979326   60183 logs.go:274] 0 containers: []
	W0725 13:25:22.979338   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:22.979394   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:23.008775   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.008789   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:23.008847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:23.038068   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.038098   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:23.038155   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:23.066885   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.066899   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:23.066948   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:23.095779   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.095792   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:23.095847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:23.124721   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.124733   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:23.124795   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:23.154730   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.154742   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:23.154749   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:23.154757   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:23.194256   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:23.194269   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:23.205440   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:23.205452   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:23.257296   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:23.257307   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:23.257314   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:23.270751   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:23.270762   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:25.325770   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054934272s)
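	The container-status command above uses a small shell fallback worth unpacking: the backtick substitution `which crictl || echo crictl` expands to the crictl path when it is installed and to the bare word "crictl" otherwise, so the command never expands to an empty string; if crictl then fails or is absent, the trailing "|| sudo docker ps -a" takes over. A minimal sketch of the same idiom:
	  # Prefer crictl when present, otherwise fall back to the Docker CLI.
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a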
	I0725 13:25:27.826256   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:27.920179   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:27.950490   60183 logs.go:274] 0 containers: []
	W0725 13:25:27.950501   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:27.950549   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:27.983247   60183 logs.go:274] 0 containers: []
	W0725 13:25:27.983258   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:27.983323   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:28.019768   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.019777   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:28.019833   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:28.052617   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.052630   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:28.052685   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:28.082546   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.082559   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:28.082614   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:28.111799   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.111814   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:28.111884   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:28.142096   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.142112   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:28.142180   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:28.173212   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.173223   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:28.173230   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:28.173237   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:28.213670   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:28.213689   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:28.228963   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:28.228980   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:28.292093   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:28.292105   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:28.292112   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:28.305882   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:28.305895   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:30.357483   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051516392s)
	I0725 13:25:32.857801   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:32.920405   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:32.951495   60183 logs.go:274] 0 containers: []
	W0725 13:25:32.951507   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:32.951573   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:32.982559   60183 logs.go:274] 0 containers: []
	W0725 13:25:32.982570   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:32.982655   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:33.014304   60183 logs.go:274] 0 containers: []
	W0725 13:25:33.014316   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:33.014372   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:33.045031   60183 logs.go:274] 0 containers: []
	W0725 13:25:33.045045   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:33.045103   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:33.076125   60183 logs.go:274] 0 containers: []
	W0725 13:25:33.076138   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:33.076193   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:33.107166   60183 logs.go:274] 0 containers: []
	W0725 13:25:33.107180   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:33.107235   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:33.137456   60183 logs.go:274] 0 containers: []
	W0725 13:25:33.137469   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:33.137530   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:33.171122   60183 logs.go:274] 0 containers: []
	W0725 13:25:33.171134   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:33.171155   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:33.171184   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:35.227732   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056475658s)
	I0725 13:25:35.227839   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:35.227845   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:35.270608   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:35.270628   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:35.284387   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:35.284404   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:35.362794   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:35.362805   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:35.362811   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:37.877762   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:37.919828   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:37.957054   60183 logs.go:274] 0 containers: []
	W0725 13:25:37.957066   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:37.957124   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:38.005583   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.005599   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:38.005665   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:38.042986   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.043000   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:38.043073   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:38.077489   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.077504   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:38.077622   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:38.109717   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.109730   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:38.109784   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:38.144432   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.144445   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:38.144510   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:38.178094   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.178107   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:38.178161   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:38.211479   60183 logs.go:274] 0 containers: []
	W0725 13:25:38.211490   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:38.211497   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:38.211503   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:38.265568   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:38.265587   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:38.278006   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:38.278019   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:38.334734   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:38.334747   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:38.334754   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:38.350317   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:38.350336   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:40.408717   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058308027s)
	I0725 13:25:42.909061   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:42.919708   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:42.980785   60183 logs.go:274] 0 containers: []
	W0725 13:25:42.980799   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:42.980859   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:43.016265   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.016279   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:43.016344   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:43.052595   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.052611   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:43.052679   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:43.094825   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.094838   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:43.094909   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:43.143113   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.143124   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:43.143182   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:43.192235   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.192254   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:43.192331   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:43.238964   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.238979   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:43.239042   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:43.274817   60183 logs.go:274] 0 containers: []
	W0725 13:25:43.274831   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:43.274839   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:43.274847   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:43.329576   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:43.329598   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:43.346283   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:43.346298   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:43.419982   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:43.419995   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:43.420004   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:43.437775   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:43.437791   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:45.505908   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06804555s)
	I0725 13:25:48.008278   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:48.420285   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:48.450824   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.450836   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:48.450900   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:48.482052   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.482064   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:48.482186   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:48.514982   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.514994   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:48.515072   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:48.546117   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.546129   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:48.546191   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:48.579208   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.579220   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:48.579276   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:48.607229   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.607241   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:48.607300   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:48.636486   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.636497   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:48.636550   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:48.669210   60183 logs.go:274] 0 containers: []
	W0725 13:25:48.669237   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:48.669245   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:48.669252   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:48.716000   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:48.716028   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:48.729138   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:48.729152   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:48.789074   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:48.789089   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:48.789104   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:48.804829   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:48.804842   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:50.862357   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057442124s)
	I0725 13:25:53.362696   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:53.421725   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:53.453584   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.453598   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:53.453662   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:53.488992   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.489011   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:53.489073   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:53.522722   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.522737   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:53.522798   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:53.556524   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.556538   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:53.556602   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:53.589371   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.589387   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:53.589457   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:53.624050   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.624066   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:53.624133   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:53.658748   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.658764   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:53.658832   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:53.694048   60183 logs.go:274] 0 containers: []
	W0725 13:25:53.702922   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:53.702941   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:53.702952   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:53.765906   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:53.765920   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:53.765929   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:53.782058   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:53.782071   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:55.834907   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052764219s)
	I0725 13:25:55.835028   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:55.835036   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:55.883806   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:55.883829   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:58.398740   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:58.420503   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:58.451719   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.451732   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:58.451790   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:58.479382   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.479395   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:58.479453   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:58.508150   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.508161   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:58.508215   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:58.536146   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.536157   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:58.536214   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:58.564553   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.564564   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:58.564620   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:58.592277   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.592289   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:58.592343   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:58.619845   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.619856   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:58.619920   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:58.647798   60183 logs.go:274] 0 containers: []
	W0725 13:25:58.647811   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:58.647819   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:58.647826   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:58.664083   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:58.664096   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:26:00.716784   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052599264s)
	I0725 13:26:00.716896   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:26:00.716905   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:26:00.769152   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:26:00.769177   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:26:00.784253   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:26:00.784267   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:26:00.852117   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:26:03.352401   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:03.363454   60183 kubeadm.go:630] restartCluster took 4m4.845852269s
	W0725 13:26:03.363551   60183 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0725 13:26:03.363569   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:26:03.789422   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:26:03.801884   60183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:03.809399   60183 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:26:03.809450   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:26:03.821172   60183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
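	The "config check failed, skipping stale config cleanup" record above is the expected outcome of a probe, not a new fault: minikube lists the four kubeconfig files a previous control plane would have left behind, and exit status 2 from ls (all four missing) simply means there is no stale config to clean before kubeadm init runs. The probe, as run in the log:
	  sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf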
	I0725 13:26:03.821233   60183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:28:00.078228   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:28:00.141595   60183 out.go:204]   - Booting up control plane ...
	W0725 13:28:00.145489   60183 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
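	The repeated [kubelet-check] lines above all hit the kubelet's local health endpoint; "connection refused" on 127.0.0.1:10248 means the kubelet never came up far enough to bind its healthz port, so the static control-plane pods were never started. On the node, the same checks kubeadm suggests (plus the raw probe it performs) would be, as a sketch:
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  curl -sSL http://localhost:10248/healthz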
	
	I0725 13:28:00.145526   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:28:00.568444   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:28:00.578598   60183 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:28:00.578655   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:28:00.586062   60183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:28:00.586084   60183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:28:01.303869   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:28:02.819781   60183 out.go:204]   - Booting up control plane ...
	I0725 13:29:57.738590   60183 kubeadm.go:397] StartCluster complete in 7m59.250327249s
	I0725 13:29:57.738667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:29:57.767241   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.767253   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:29:57.767311   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:29:57.795435   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.795448   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:29:57.795503   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:29:57.824559   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.824581   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:29:57.824642   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:29:57.854900   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.854912   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:29:57.854967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:29:57.883684   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.883695   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:29:57.883747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:29:57.917022   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.917034   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:29:57.917091   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:29:57.948784   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.948800   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:29:57.948858   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:29:57.982242   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.982254   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:29:57.982261   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:29:57.982268   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:30:00.039435   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0570932s)
	I0725 13:30:00.039559   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:30:00.039566   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:30:00.078912   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:30:00.078928   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:30:00.090262   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:30:00.090278   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:30:00.144105   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:30:00.144118   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:30:00.144124   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0725 13:30:00.158368   60183 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 13:30:00.158385   60183 out.go:239] * 
	W0725 13:30:00.158482   60183 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
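	Taken together: both kubeadm init attempts failed the same way, with the kubelet never answering its healthz probe, and the preflight warnings above (swap enabled, Docker 20.10.17 not on v1.16.0's validated list, kubelet service not enabled) were deliberately skipped via the long --ignore-preflight-errors list in the init command. If reproducing by hand, kubeadm's own advice for the first and third warnings would be, as a sketch:
	  sudo swapoff -a                        # per [WARNING Swap]
	  sudo systemctl enable kubelet.service  # per [WARNING Service-Kubelet]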
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:30:00.158497   60183 out.go:239] * 
	W0725 13:30:00.159027   60183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 13:30:00.221624   60183 out.go:177] 
	W0725 13:30:00.264001   60183 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:30:00.264127   60183 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 13:30:00.264205   60183 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 13:30:00.327781   60183 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
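The failure above is kubeadm timing out while polling the kubelet health endpoint (http://localhost:10248/healthz), and the stderr ends with a concrete suggestion. A minimal shell sketch of that follow-up, using the profile name from this run; the --extra-config value is the one proposed in the Suggestion line above, an assumption to try rather than a verified fix:

	# Probe the kubelet health endpoint and journal from inside the node (sketch):
	minikube ssh -p old-k8s-version-20220725131610-44543 "curl -sSL http://localhost:10248/healthz"
	minikube ssh -p old-k8s-version-20220725131610-44543 "sudo journalctl -xeu kubelet | tail -n 50"
	# Retry the start with the suggested cgroup driver (otherwise same flags as the failing run):
	out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
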
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:21:54.974897982Z",
	            "FinishedAt": "2022-07-25T20:21:52.153635121Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d84a1a595955080b294e46d4c0e514ca16b44447ef22b822c1bc5aa4576d787b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d84a1a595955",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "43cf55334515d40188d52abea75fa535d217d7aa8b4c915012814925b60fae46",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
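The fields the post-mortem relies on can be read straight out of the JSON above with docker inspect's Go-template --format flag; a short sketch against this container (template paths mirror the structure shown, e.g. .State.Status is "running" and the 8443/tcp host port is 58937):

	# Extract individual fields instead of parsing the full dump (sketch):
	docker inspect old-k8s-version-20220725131610-44543 --format '{{.State.Status}}'
	docker inspect old-k8s-version-20220725131610-44543 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
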
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (449.787775ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
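Exit status 2 from 'status' reflects the cluster's state rather than a harness bug, which is why the helper marks it "(may be ok)". Running the same command without the --format template shows the per-component breakdown; the output below is an illustrative sketch for a host whose kubelet never came up, not captured from this run:

	out/minikube-darwin-amd64 status -p old-k8s-version-20220725131610-44543
	# host: Running
	# kubelet: Stopped
	# apiserver: Stopped
	# kubeconfig: Configured
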
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs -n 25: (3.576934088s)
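Before the log dump below, note that the warning box in the stderr above asks reporters to attach a full log capture when filing an issue; the file-writing variant of the command the harness just ran would be (file name is the box's own suggestion, pairing it with -p is an assumption):

	out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs --file=logs.txt
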
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	| start   | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:17 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:20 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543        | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725132539-44543        | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725132539-44543        | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725132539-44543        | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543        | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT |                     |
	|         | embed-certs-20220725132539-44543                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:26:48
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:26:48.547427   60896 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:26:48.547663   60896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:26:48.547668   60896 out.go:309] Setting ErrFile to fd 2...
	I0725 13:26:48.547672   60896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:26:48.547782   60896 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:26:48.548312   60896 out.go:303] Setting JSON to false
	I0725 13:26:48.563654   60896 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15980,"bootTime":1658764828,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:26:48.563800   60896 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:26:48.585799   60896 out.go:177] * [embed-certs-20220725132539-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:26:48.627711   60896 notify.go:193] Checking for updates...
	I0725 13:26:48.648811   60896 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:26:48.669719   60896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:26:48.690638   60896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:26:48.712084   60896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:26:48.734191   60896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:26:48.756550   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:26:48.757217   60896 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:26:48.825997   60896 docker.go:137] docker version: linux-20.10.17
	I0725 13:26:48.826132   60896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:26:48.960295   60896 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:26:48.899440621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:26:49.004155   60896 out.go:177] * Using the docker driver based on existing profile
	I0725 13:26:49.025981   60896 start.go:284] selected driver: docker
	I0725 13:26:49.026017   60896 start.go:808] validating driver "docker" against &{Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:49.026150   60896 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:26:49.029491   60896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:26:49.162003   60896 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:26:49.103146968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:26:49.162174   60896 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:26:49.162190   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:26:49.162199   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:26:49.162223   60896 start_flags.go:310] config:
	{Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:49.205870   60896 out.go:177] * Starting control plane node embed-certs-20220725132539-44543 in cluster embed-certs-20220725132539-44543
	I0725 13:26:49.226856   60896 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:26:49.249040   60896 out.go:177] * Pulling base image ...
	I0725 13:26:49.291616   60896 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:26:49.291652   60896 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:26:49.291684   60896 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:26:49.291697   60896 cache.go:57] Caching tarball of preloaded images
	I0725 13:26:49.291833   60896 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:26:49.291855   60896 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:26:49.292505   60896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/config.json ...
	I0725 13:26:49.355938   60896 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:26:49.355966   60896 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:26:49.355978   60896 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:26:49.356021   60896 start.go:370] acquiring machines lock for embed-certs-20220725132539-44543: {Name:mkedcda8c6ffd244a6eb5ea62b1d8110eb07449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:26:49.356105   60896 start.go:374] acquired machines lock for "embed-certs-20220725132539-44543" in 59.916µs
	I0725 13:26:49.356125   60896 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:26:49.356136   60896 fix.go:55] fixHost starting: 
	I0725 13:26:49.356360   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:26:49.424125   60896 fix.go:103] recreateIfNeeded on embed-certs-20220725132539-44543: state=Stopped err=<nil>
	W0725 13:26:49.424176   60896 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:26:49.446453   60896 out.go:177] * Restarting existing docker container for "embed-certs-20220725132539-44543" ...
	I0725 13:26:49.468132   60896 cli_runner.go:164] Run: docker start embed-certs-20220725132539-44543
	I0725 13:26:49.813394   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:26:49.886745   60896 kic.go:415] container "embed-certs-20220725132539-44543" state is running.
	I0725 13:26:49.887403   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:49.963095   60896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/config.json ...
	I0725 13:26:49.963502   60896 machine.go:88] provisioning docker machine ...
	I0725 13:26:49.963527   60896 ubuntu.go:169] provisioning hostname "embed-certs-20220725132539-44543"
	I0725 13:26:49.963596   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.039063   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.039288   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.039301   60896 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220725132539-44543 && echo "embed-certs-20220725132539-44543" | sudo tee /etc/hostname
	I0725 13:26:50.170431   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220725132539-44543
	
	I0725 13:26:50.170514   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.246235   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.246398   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.246415   60896 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220725132539-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220725132539-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220725132539-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:26:50.365664   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:26:50.365688   60896 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:26:50.365709   60896 ubuntu.go:177] setting up certificates
	I0725 13:26:50.365719   60896 provision.go:83] configureAuth start
	I0725 13:26:50.365796   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:50.440349   60896 provision.go:138] copyHostCerts
	I0725 13:26:50.440475   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:26:50.440485   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:26:50.440587   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:26:50.440815   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:26:50.440830   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:26:50.440890   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:26:50.441056   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:26:50.441062   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:26:50.441120   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:26:50.441275   60896 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220725132539-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220725132539-44543]
	I0725 13:26:50.557687   60896 provision.go:172] copyRemoteCerts
	I0725 13:26:50.557751   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:26:50.557825   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.629344   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:50.718627   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:26:50.735715   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 13:26:50.751806   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:26:50.768118   60896 provision.go:86] duration metric: configureAuth took 402.373037ms
	I0725 13:26:50.768132   60896 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:26:50.768266   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:26:50.768315   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.840378   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.840536   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.840548   60896 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:26:50.965802   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:26:50.965819   60896 ubuntu.go:71] root file system type: overlay
	I0725 13:26:50.966002   60896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:26:50.966080   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.036849   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:51.036995   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:51.037043   60896 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:26:51.167067   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:26:51.167151   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.237871   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:51.238049   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:51.238062   60896 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:26:51.363554   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:26:51.363567   60896 machine.go:91] provisioned docker machine in 1.400015538s
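
	The docker.service refresh a few lines above is an idempotent diff-or-swap: the freshly rendered unit is compared against the installed one, and only when they differ is the new file moved into place and the daemon reloaded and restarted. A minimal standalone sketch of the same pattern in shell, assuming the provisioner has already written /lib/systemd/system/docker.service.new as in the log:

	    # Refresh docker.service only when the rendered unit actually changed.
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	        sudo systemctl -f daemon-reload    # pick up the edited unit file
	        sudo systemctl -f enable docker    # keep the service enabled at boot
	        sudo systemctl -f restart docker   # apply the new ExecStart flags
	    fi

	Because diff exits non-zero when the files differ, this is equivalent to the `diff ... || { ... }` one-liner in the log; an unchanged unit costs one diff and avoids a disruptive docker restart.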
	I0725 13:26:51.363577   60896 start.go:307] post-start starting for "embed-certs-20220725132539-44543" (driver="docker")
	I0725 13:26:51.363582   60896 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:26:51.363643   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:26:51.363691   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.437205   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.527183   60896 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:26:51.530742   60896 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:26:51.530759   60896 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:26:51.530765   60896 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:26:51.530770   60896 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:26:51.530783   60896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:26:51.530909   60896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:26:51.531049   60896 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:26:51.531209   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:26:51.538152   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:26:51.555946   60896 start.go:310] post-start completed in 192.354602ms
	I0725 13:26:51.556040   60896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:26:51.556105   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.627974   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.714198   60896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:26:51.718597   60896 fix.go:57] fixHost completed within 2.362392942s
	I0725 13:26:51.718610   60896 start.go:82] releasing machines lock for "embed-certs-20220725132539-44543", held for 2.362429297s
	I0725 13:26:51.718700   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:51.789671   60896 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:26:51.789678   60896 ssh_runner.go:195] Run: systemctl --version
	I0725 13:26:51.789751   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.789757   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.866411   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.867863   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:52.170482   60896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:26:52.180365   60896 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:26:52.180423   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:26:52.191899   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:26:52.204163   60896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:26:52.271546   60896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:26:52.341953   60896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:26:52.404515   60896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:26:52.623120   60896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:26:52.693995   60896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:26:52.758221   60896 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:26:52.767593   60896 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:26:52.767655   60896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:26:52.771384   60896 start.go:471] Will wait 60s for crictl version
	I0725 13:26:52.771432   60896 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:26:52.874937   60896 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:26:52.875000   60896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:26:52.909296   60896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:26:52.986216   60896 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:26:52.986400   60896 cli_runner.go:164] Run: docker exec -t embed-certs-20220725132539-44543 dig +short host.docker.internal
	I0725 13:26:53.115923   60896 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:26:53.116029   60896 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:26:53.121448   60896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
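
	Unpacked for readability, the one-line /etc/hosts edit above is a filter-append-replace: drop any stale host.minikube.internal entry, append the address just resolved by digging host.docker.internal, and copy the staged result back over /etc/hosts. A sketch of the same steps, with the 192.168.65.2 address and the /tmp/h.$$ temp path taken from the log line above:

	    # Rewrite /etc/hosts with a current host.minikube.internal entry.
	    {
	        grep -v $'\thost.minikube.internal$' /etc/hosts    # keep everything but the old entry
	        echo $'192.168.65.2\thost.minikube.internal'       # append the freshly resolved address
	    } > /tmp/h.$$                                          # stage under a PID-unique temp name
	    sudo cp /tmp/h.$$ /etc/hosts                           # install it in a single privileged step

	Writing the staged file with sudo cp rather than a redirect means only the final copy needs root, and the $$-suffixed temp name keeps concurrent runs from clobbering each other's staging files.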
	I0725 13:26:53.131166   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:53.203036   60896 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:26:53.203111   60896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:26:53.232252   60896 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:26:53.232269   60896 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:26:53.232348   60896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:26:53.262252   60896 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:26:53.262272   60896 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:26:53.262351   60896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:26:53.333671   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:26:53.333682   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:26:53.333696   60896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:26:53.333709   60896 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220725132539-44543 NodeName:embed-certs-20220725132539-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:26:53.333811   60896 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220725132539-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
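
	The rendered config above is later copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As a hypothetical sanity check (not a step this log performs), a config like this can be validated with kubeadm's own dry-run mode before it is applied:

	    # Parse and validate the generated kubeadm config without mutating the node.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run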
	
	I0725 13:26:53.333903   60896 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220725132539-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:26:53.333962   60896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:26:53.341316   60896 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:26:53.341375   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:26:53.348729   60896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0725 13:26:53.360708   60896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:26:53.372857   60896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0725 13:26:53.385380   60896 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:26:53.388890   60896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:26:53.398360   60896 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543 for IP: 192.168.76.2
	I0725 13:26:53.398470   60896 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:26:53.398520   60896 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:26:53.398593   60896 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/client.key
	I0725 13:26:53.398650   60896 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.key.31bdca25
	I0725 13:26:53.398698   60896 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.key
	I0725 13:26:53.398918   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:26:53.398960   60896 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:26:53.398971   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:26:53.399004   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:26:53.399033   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:26:53.399058   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:26:53.399119   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:26:53.399636   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:26:53.416223   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:26:53.432572   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:26:53.449196   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:26:53.465993   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:26:53.482339   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:26:53.498714   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:26:53.515036   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:26:53.531395   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:26:53.547950   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:26:53.587127   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:26:53.603886   60896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:26:53.616328   60896 ssh_runner.go:195] Run: openssl version
	I0725 13:26:53.621375   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:26:53.628836   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.632532   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.632580   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.637683   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:26:53.644581   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:26:53.652216   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.655971   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.656010   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.661284   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:26:53.668359   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:26:53.676006   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.679917   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.680017   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.685793   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:26:53.692646   60896 kubeadm.go:395] StartCluster: {Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:53.692743   60896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:26:53.721978   60896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:26:53.729370   60896 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:26:53.729381   60896 kubeadm.go:626] restartCluster start
	I0725 13:26:53.729418   60896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:26:53.736072   60896 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:53.736123   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:53.808101   60896 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220725132539-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:26:53.808262   60896 kubeconfig.go:127] "embed-certs-20220725132539-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:26:53.808621   60896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:26:53.809797   60896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:26:53.817648   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:53.817713   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:53.826733   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.027462   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.027716   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.038403   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.227398   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.227644   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.238236   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.427576   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.427755   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.438756   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.627358   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.627497   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.636422   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.827394   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.827487   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.838488   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.026933   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.027049   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.037485   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.226880   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.226957   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.235857   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.427840   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.428001   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.438429   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.628967   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.629079   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.639603   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.828963   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.829161   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.839558   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.027028   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.027119   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.037616   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.229013   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.229229   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.239416   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.427054   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.427246   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.437328   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.628631   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.628739   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.639033   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.826934   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.826996   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.835856   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.835867   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.835926   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.844515   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.844534   60896 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
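
Editor's note: the block above is a poll-until-deadline loop at a ~200ms cadence; each iteration runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and treats a non-zero exit as "not up yet". A minimal local sketch of the same pattern, with the SSH hop replaced by a local pgrep:

```go
// Sketch of the poll loop visible in the log above; timeout and interval
// are illustrative, and pgrep runs locally instead of via ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a matching process exists, 1 when it does not.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(200 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServerProcess(3 * time.Second); err != nil {
		fmt.Println("needs reconfigure:", err) // same conclusion the log reaches
	}
}
```
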
	I0725 13:26:56.844542   60896 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:26:56.844600   60896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:26:56.878986   60896 docker.go:443] Stopping containers: [c3d79be36829 1c51316a3481 68bdaf34cc2f 7c6f3ac7c5f3 79f7459bb476 b7549c872bf4 ff7140875b14 b8bc65908490 2261dd283394 99e1f7baa7d0 a2c3192c3c39 f358469cafac 4eba7ec75371 e84371b0922e 3ff0cb9c7d63 22853dac1834]
	I0725 13:26:56.879060   60896 ssh_runner.go:195] Run: docker stop c3d79be36829 1c51316a3481 68bdaf34cc2f 7c6f3ac7c5f3 79f7459bb476 b7549c872bf4 ff7140875b14 b8bc65908490 2261dd283394 99e1f7baa7d0 a2c3192c3c39 f358469cafac 4eba7ec75371 e84371b0922e 3ff0cb9c7d63 22853dac1834
	I0725 13:26:56.908333   60896 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:26:56.918203   60896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:26:56.925547   60896 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 25 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 25 20:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 20:25 /etc/kubernetes/scheduler.conf
	
	I0725 13:26:56.925599   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:26:56.932577   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:26:56.939336   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:26:56.946087   60896 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.946134   60896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:26:56.952735   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:26:56.959517   60896 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.959565   60896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:26:56.965970   60896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:56.972930   60896 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:56.972940   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.017926   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.767698   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.943236   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.991345   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
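
Editor's note: rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file. A sketch of that sequencing, under the assumption that `kubeadm` is on PATH and you have root on the node (the log invokes it via sudo over SSH from /var/lib/minikube/binaries):

```go
// Sketch of the phase replay above; not minikube's actual kubeadm.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```
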
	I0725 13:26:58.050211   60896 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:26:58.050286   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:58.582245   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:59.082479   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:59.094188   60896 api_server.go:71] duration metric: took 1.043948998s to wait for apiserver process to appear ...
	I0725 13:26:59.094209   60896 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:26:59.094231   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:26:59.095509   60896 api_server.go:256] stopped: https://127.0.0.1:59426/healthz: Get "https://127.0.0.1:59426/healthz": EOF
	I0725 13:26:59.596003   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:02.411623   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:27:02.411651   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:27:02.597905   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:02.607032   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:27:02.607059   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:27:03.095805   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:03.102036   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:27:03.102057   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:27:03.595754   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:03.601497   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 200:
	ok
	I0725 13:27:03.613887   60896 api_server.go:140] control plane version: v1.24.2
	I0725 13:27:03.613902   60896 api_server.go:130] duration metric: took 4.51955617s to wait for apiserver health ...
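
Editor's note: the healthz progression above is the normal apiserver bring-up sequence: EOF while nothing listens, 403 once the server is up but RBAC does not yet allow `system:anonymous` to read /healthz, 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, then 200/"ok". A sketch of the probe, assuming the Docker-published port 59426 from the log; certificate verification is skipped because the probe targets 127.0.0.1 rather than a name in the cert's SANs:

```go
// Sketch of the /healthz polling above; interval and port are from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://127.0.0.1:59426/healthz")
		if err != nil {
			time.Sleep(500 * time.Millisecond) // EOF etc.: apiserver not listening yet
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz:", string(body)) // "ok"
			return
		}
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode) // 403/500 are transient
		time.Sleep(500 * time.Millisecond)
	}
}
```
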
	I0725 13:27:03.613908   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:27:03.613912   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:27:03.613920   60896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:27:03.621732   60896 system_pods.go:59] 8 kube-system pods found
	I0725 13:27:03.621746   60896 system_pods.go:61] "coredns-6d4b75cb6d-htpr6" [ea0b0f7f-8b0a-4385-b505-e3122fe524b0] Running
	I0725 13:27:03.621754   60896 system_pods.go:61] "etcd-embed-certs-20220725132539-44543" [9d01d9cf-2802-46d5-8ca1-7a4e6c619232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 13:27:03.621759   60896 system_pods.go:61] "kube-apiserver-embed-certs-20220725132539-44543" [89aebf00-48c5-4d71-b8c4-ad3faade9c36] Running
	I0725 13:27:03.621763   60896 system_pods.go:61] "kube-controller-manager-embed-certs-20220725132539-44543" [b27f6cdf-dc9b-4c22-820a-434d64ff35d1] Running
	I0725 13:27:03.621767   60896 system_pods.go:61] "kube-proxy-7pjkq" [7e1ad46c-cdbd-4109-956b-3250bf6a1a8e] Running
	I0725 13:27:03.621772   60896 system_pods.go:61] "kube-scheduler-embed-certs-20220725132539-44543" [946e68be-c055-4c90-bd5d-31c53b3534a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:27:03.621779   60896 system_pods.go:61] "metrics-server-5c6f97fb75-4xt92" [705f970d-49d5-4a4c-9e18-6da6f236cff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:27:03.621783   60896 system_pods.go:61] "storage-provisioner" [4b92166d-6e5a-4692-b6e4-4269d858e8c3] Running
	I0725 13:27:03.621787   60896 system_pods.go:74] duration metric: took 7.862224ms to wait for pod list to return data ...
	I0725 13:27:03.621793   60896 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:27:03.624429   60896 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:27:03.624445   60896 node_conditions.go:123] node cpu capacity is 6
	I0725 13:27:03.624453   60896 node_conditions.go:105] duration metric: took 2.656612ms to run NodePressure ...
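
Editor's note: the ephemeral-storage (107077304Ki) and CPU (6) figures above come straight from `Node.Status.Capacity`. A client-go sketch of the same lookup, assuming a reachable kubeconfig at a hypothetical path:

```go
// Sketch of the node capacity/pressure check; not minikube's node_conditions.go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
		for _, c := range n.Status.Conditions {
			// Pressure conditions (MemoryPressure, DiskPressure, PIDPressure)
			// should be False on a healthy node.
			fmt.Printf("  %s=%s\n", c.Type, c.Status)
		}
	}
}
```
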
	I0725 13:27:03.624470   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:27:03.765029   60896 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 13:27:03.769671   60896 kubeadm.go:777] kubelet initialised
	I0725 13:27:03.769686   60896 kubeadm.go:778] duration metric: took 4.63572ms waiting for restarted kubelet to initialise ...
	I0725 13:27:03.769694   60896 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:27:03.774718   60896 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:03.779434   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:03.779442   60896 pod_ready.go:81] duration metric: took 4.711352ms waiting for pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace to be "Ready" ...
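
Editor's note: each `pod_ready.go` line that follows repeats one test: a pod counts as Ready when its PodCondition of type `Ready` has status `True`. A sketch of that check, assuming a clientset built as in the capacity sketch above:

```go
// Sketch of the readiness test behind the pod_ready.go waits in this log.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady fetches a pod and reports whether its Ready condition is True,
// mirroring `has status "Ready":"True"` in the log lines below.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```
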
	I0725 13:27:03.779448   60896 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:05.795546   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:07.796661   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:09.797132   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:11.797247   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:13.797607   60896 pod_ready.go:92] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:13.797620   60896 pod_ready.go:81] duration metric: took 10.017876261s waiting for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.797626   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.801581   60896 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:13.801589   60896 pod_ready.go:81] duration metric: took 3.958491ms waiting for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.801594   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:15.814130   60896 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:18.313101   60896 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:18.812898   60896 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.812911   60896 pod_ready.go:81] duration metric: took 5.011165723s waiting for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.812917   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7pjkq" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.816934   60896 pod_ready.go:92] pod "kube-proxy-7pjkq" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.816941   60896 pod_ready.go:81] duration metric: took 4.020031ms waiting for pod "kube-proxy-7pjkq" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.816946   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.820860   60896 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.820867   60896 pod_ready.go:81] duration metric: took 3.91141ms waiting for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.820873   60896 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:20.830973   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:22.831222   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:25.330338   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:27.331801   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:29.333198   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:31.833556   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:34.331073   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:36.332608   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:38.333354   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:40.832521   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:42.833818   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:45.331089   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:47.334374   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:49.832653   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:51.834727   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:54.334928   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:56.832318   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:00.078228   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:28:00.141595   60183 out.go:204]   - Booting up control plane ...
	W0725 13:28:00.145489   60183 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
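
Editor's note: the kubelet-check lines in the failure above are kubeadm probing the kubelet's own local healthz endpoint on port 10248; "connection refused" there means the kubelet process never came up, which is exactly what the error summary concludes. A sketch of the equivalent probe:

```go
// Sketch of kubeadm's kubelet-check, i.e. `curl -sSL http://localhost:10248/healthz`.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}
```
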
	
	I0725 13:28:00.145526   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:28:00.568444   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:28:00.578598   60183 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:28:00.578655   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:28:00.586062   60183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:28:00.586084   60183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:28:01.303869   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:27:58.834263   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:01.331255   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:03.332422   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:02.819781   60183 out.go:204]   - Booting up control plane ...
	I0725 13:28:05.334830   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:07.335519   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:09.834439   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:11.835591   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:14.334337   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:16.335031   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:18.835705   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:21.333617   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:23.334016   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:25.833249   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:27.835336   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:30.334081   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:32.335851   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:34.833220   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:36.833578   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:39.336254   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:41.835840   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:44.334110   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:46.833528   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:48.836513   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:50.836788   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:53.336552   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:55.835048   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:57.835550   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:00.336892   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:02.836866   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:05.336690   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:07.835667   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:09.837186   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:12.335226   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:14.335725   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:16.336690   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:18.836768   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:21.337269   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:23.837151   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:26.335027   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:28.338729   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:30.837640   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:33.337517   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:35.835459   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:37.836627   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:40.335900   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:42.337529   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:44.337705   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:46.836842   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:49.337159   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:51.838140   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:54.338382   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:56.837922   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:57.738590   60183 kubeadm.go:397] StartCluster complete in 7m59.250327249s
	I0725 13:29:57.738667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:29:57.767241   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.767253   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:29:57.767311   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:29:57.795435   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.795448   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:29:57.795503   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:29:57.824559   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.824581   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:29:57.824642   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:29:57.854900   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.854912   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:29:57.854967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:29:57.883684   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.883695   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:29:57.883747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:29:57.917022   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.917034   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:29:57.917091   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:29:57.948784   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.948800   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:29:57.948858   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:29:57.982242   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.982254   60183 logs.go:276] No container was found matching "kube-controller-manager"
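
Editor's note: the diagnostics pass above enumerates each control-plane component by listing containers (running or exited) whose names carry the `k8s_<component>` prefix that kubelet-managed containers get under the Docker runtime; every lookup returning zero containers confirms the control plane never started. A local sketch of the same sweep:

```go
// Sketch of the container enumeration above; filters match the log's docker ps calls.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
		"kube-controller-manager",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // log prints "0 containers: []"
	}
}
```
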
	I0725 13:29:57.982261   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:29:57.982268   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:30:00.039435   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0570932s)
	I0725 13:30:00.039559   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:30:00.039566   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:30:00.078912   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:30:00.078928   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:30:00.090262   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:30:00.090278   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:30:00.144105   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:30:00.144118   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:30:00.144124   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0725 13:30:00.158368   60183 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 13:30:00.158385   60183 out.go:239] * 
	W0725 13:30:00.158482   60183 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:30:00.158497   60183 out.go:239] * 
	W0725 13:30:00.159027   60183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 13:30:00.221624   60183 out.go:177] 
	W0725 13:30:00.264001   60183 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:30:00.264127   60183 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 13:30:00.264205   60183 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 13:30:00.327781   60183 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:21:55 UTC, end at Mon 2022-07-25 20:30:01 UTC. --
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.451853324Z" level=info msg="Processing signal 'terminated'"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.452788005Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.453258112Z" level=info msg="Daemon shutdown complete"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.453320986Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: docker.service: Succeeded.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.506263841Z" level=info msg="Starting up"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508857550Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508891909Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508909432Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508917186Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509870019Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509899398Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509912393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509918763Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.513919873Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.517902418Z" level=info msg="Loading containers: start."
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.592180966Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.621348334Z" level=info msg="Loading containers: done."
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.629449532Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.629505415Z" level=info msg="Daemon has completed initialization"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.651604471Z" level=info msg="API listen on [::]:2376"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.655414726Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-07-25T20:30:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:30:04 up  1:11,  0 users,  load average: 0.75, 1.10, 1.25
	Linux old-k8s-version-20220725131610-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:21:55 UTC, end at Mon 2022-07-25 20:30:04 UTC. --
	Jul 25 20:30:02 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: I0725 20:30:03.239525   14440 server.go:410] Version: v1.16.0
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: I0725 20:30:03.240156   14440 plugins.go:100] No cloud provider specified.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: I0725 20:30:03.240212   14440 server.go:773] Client rotation is on, will bootstrap in background
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: I0725 20:30:03.242147   14440 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: W0725 20:30:03.242861   14440 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: W0725 20:30:03.242925   14440 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14440]: F0725 20:30:03.242954   14440 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: I0725 20:30:03.982889   14452 server.go:410] Version: v1.16.0
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: I0725 20:30:03.983117   14452 plugins.go:100] No cloud provider specified.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: I0725 20:30:03.983128   14452 server.go:773] Client rotation is on, will bootstrap in background
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: I0725 20:30:03.984769   14452 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: W0725 20:30:03.985534   14452 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: W0725 20:30:03.985594   14452 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 kubelet[14452]: F0725 20:30:03.985628   14452 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 20:30:03 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 13:30:04.255129   61162 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (452.675599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220725131610-44543" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (491.49s)
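For reference, the failure mode above is consistent throughout the log: kubeadm times out waiting for the control plane because the kubelet crash-loops with "failed to run Kubelet: mountpoint for cpu not found", and minikube itself suggests passing --extra-config=kubelet.cgroup-driver=systemd. A minimal triage sketch, assuming shell access to the node; the profile name is taken from this run, and the commands are the ones the output above already recommends:

	# inside the node, e.g.: out/minikube-darwin-amd64 ssh -p old-k8s-version-20220725131610-44543
	systemctl status kubelet                    # unit state; in this run it restarts with status=255
	journalctl -xeu kubelet | tail -n 20        # here: "failed to run Kubelet: mountpoint for cpu not found"
	docker ps -a | grep kube | grep -v pause    # any control-plane containers at all? (none in this run)
	sudo swapoff -a                             # clears the [WARNING Swap] preflight warning
	# then, from the host, retry with the cgroup driver minikube suggests:
	out/minikube-darwin-amd64 start -p old-k8s-version-20220725131610-44543 --extra-config=kubelet.cgroup-driver=systemd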

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (43.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220725131741-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543: exit status 2 (16.112270966s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543: exit status 2 (16.109259904s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220725131741-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
E0725 13:25:27.385136   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
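As a side note on what this assertion checks: the Pause test drives the CLI calls shown above and expects the Go-template status fields to flip to "Paused" after pausing. A sketch of reproducing the check by hand, using only commands and flags that appear in this log:

	p=no-preload-20220725131741-44543
	out/minikube-darwin-amd64 pause -p "$p" --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p "$p" -n "$p"   # test wants "Paused"; this run printed "Stopped"
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p "$p" -n "$p"
	out/minikube-darwin-amd64 unpause -p "$p" --alsologtostderr -v=1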
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220725131741-44543
helpers_test.go:235: (dbg) docker inspect no-preload-20220725131741-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef",
	        "Created": "2022-07-25T20:17:43.927918712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:19:38.654487919Z",
	            "FinishedAt": "2022-07-25T20:19:36.710324647Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/hosts",
	        "LogPath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef-json.log",
	        "Name": "/no-preload-20220725131741-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220725131741-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220725131741-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220725131741-44543",
	                "Source": "/var/lib/docker/volumes/no-preload-20220725131741-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220725131741-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220725131741-44543",
	                "name.minikube.sigs.k8s.io": "no-preload-20220725131741-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff3367752dba2d7f7888a1b6e610f38fe877e3282dc489be00c3af4ffc717d9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58795"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58796"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58798"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58799"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ff3367752dba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220725131741-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ad80487f9759",
	                        "no-preload-20220725131741-44543"
	                    ],
	                    "NetworkID": "b1ac5d8a333e627253e80ab8f076639f114a35093181717e468951da733821e1",
	                    "EndpointID": "c13f7a6156c394b1261e1da28c4b37be6f47094fa09a3a51888abeab0903f33f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
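The inspect dump above is the full container record; the individual fields the harness keys on can be pulled directly with Go templates instead. A small sketch (field paths taken from the JSON above; the one-liners themselves are illustrative):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-20220725131741-44543   # "running paused=false" in this run
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-20220725131741-44543   # 58799 here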
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220725131741-44543 logs -n 25
E0725 13:25:29.001297   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:25:29.374612   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220725131741-44543 logs -n 25: (2.857343182s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-20220725125922-44543                     | false-20220725125922-44543              | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:14 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p calico-20220725125923-44543                    | calico-20220725125923-44543             | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:14 PDT |
	| start   | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:15 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p false-20220725125922-44543                     | false-20220725125922-44543              | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:14 PDT |
	| start   | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:15 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	| start   | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:17 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:20 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:21:53
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:21:53.673919   60183 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:21:53.674091   60183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:21:53.674097   60183 out.go:309] Setting ErrFile to fd 2...
	I0725 13:21:53.674101   60183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:21:53.674202   60183 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:21:53.674680   60183 out.go:303] Setting JSON to false
	I0725 13:21:53.690728   60183 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15685,"bootTime":1658764828,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:21:53.690811   60183 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:21:53.712538   60183 out.go:177] * [old-k8s-version-20220725131610-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:21:53.734468   60183 notify.go:193] Checking for updates...
	I0725 13:21:53.755405   60183 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:21:53.777462   60183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:21:53.798424   60183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:21:53.819416   60183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:21:53.840488   60183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:21:53.862141   60183 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:21:53.884290   60183 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0725 13:21:53.905392   60183 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:21:53.973956   60183 docker.go:137] docker version: linux-20.10.17
	I0725 13:21:53.974120   60183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:21:54.106665   60183 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:21:54.051064083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:21:54.128839   60183 out.go:177] * Using the docker driver based on existing profile
	I0725 13:21:54.150256   60183 start.go:284] selected driver: docker
	I0725 13:21:54.150312   60183 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:54.150444   60183 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:21:54.153661   60183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:21:54.288038   60183 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:21:54.230541816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:21:54.288195   60183 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:21:54.288211   60183 cni.go:95] Creating CNI manager for ""
	I0725 13:21:54.288221   60183 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:21:54.288229   60183 start_flags.go:310] config:
	{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:54.310268   60183 out.go:177] * Starting control plane node old-k8s-version-20220725131610-44543 in cluster old-k8s-version-20220725131610-44543
	I0725 13:21:54.332068   60183 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:21:54.353929   60183 out.go:177] * Pulling base image ...
	I0725 13:21:54.396171   60183 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:21:54.396230   60183 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:21:54.396268   60183 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 13:21:54.396303   60183 cache.go:57] Caching tarball of preloaded images
	I0725 13:21:54.396533   60183 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:21:54.396569   60183 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
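	The cache layout checked above is predictable: the tarball lives at $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-<schema>-<k8s version>-<runtime>-<storage driver>-<arch>.tar.lz4. A minimal sketch of reproducing this existence check by hand, assuming the same MINIKUBE_HOME as this run:
	# sketch only; MINIKUBE_HOME is taken from the environment logged above
	MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	stat "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"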
	I0725 13:21:54.397710   60183 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:21:54.461117   60183 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:21:54.461134   60183 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:21:54.461150   60183 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:21:54.461221   60183 start.go:370] acquiring machines lock for old-k8s-version-20220725131610-44543: {Name:mka786150aa94c7510878ab5519b8cf30abe9378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:21:54.461319   60183 start.go:374] acquired machines lock for "old-k8s-version-20220725131610-44543" in 74.735µs
	I0725 13:21:54.461339   60183 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:21:54.461349   60183 fix.go:55] fixHost starting: 
	I0725 13:21:54.461599   60183 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:21:54.529917   60183 fix.go:103] recreateIfNeeded on old-k8s-version-20220725131610-44543: state=Stopped err=<nil>
	W0725 13:21:54.529947   60183 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:21:54.573533   60183 out.go:177] * Restarting existing docker container for "old-k8s-version-20220725131610-44543" ...
	I0725 13:21:52.402493   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:54.901739   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:54.594675   60183 cli_runner.go:164] Run: docker start old-k8s-version-20220725131610-44543
	I0725 13:21:54.964125   60183 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:21:55.037820   60183 kic.go:415] container "old-k8s-version-20220725131610-44543" state is running.
	I0725 13:21:55.038433   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:55.113560   60183 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:21:55.114030   60183 machine.go:88] provisioning docker machine ...
	I0725 13:21:55.114068   60183 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725131610-44543"
	I0725 13:21:55.114171   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.190035   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.190239   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.190254   60183 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725131610-44543 && echo "old-k8s-version-20220725131610-44543" | sudo tee /etc/hostname
	I0725 13:21:55.319366   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725131610-44543
	
	I0725 13:21:55.319439   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.392552   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.392712   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.392732   60183 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725131610-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725131610-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725131610-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:21:55.513463   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:21:55.513485   60183 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:21:55.513514   60183 ubuntu.go:177] setting up certificates
	I0725 13:21:55.513524   60183 provision.go:83] configureAuth start
	I0725 13:21:55.513588   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:55.584163   60183 provision.go:138] copyHostCerts
	I0725 13:21:55.584244   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:21:55.584253   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:21:55.584354   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:21:55.584593   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:21:55.584602   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:21:55.584658   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:21:55.584799   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:21:55.584805   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:21:55.584862   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:21:55.584974   60183 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725131610-44543 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725131610-44543]
	I0725 13:21:55.687712   60183 provision.go:172] copyRemoteCerts
	I0725 13:21:55.687798   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:21:55.687857   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.758975   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:55.843244   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:21:55.859895   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0725 13:21:55.876505   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:21:55.893602   60183 provision.go:86] duration metric: configureAuth took 380.052293ms
	I0725 13:21:55.893616   60183 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:21:55.893756   60183 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:21:55.893807   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.964720   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.964908   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.964920   60183 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:21:56.084753   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:21:56.084769   60183 ubuntu.go:71] root file system type: overlay
	I0725 13:21:56.084915   60183 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:21:56.084981   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.155842   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:56.155981   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:56.156032   60183 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:21:56.286190   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
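	The bare ExecStart= line in the unit above is load-bearing: systemd permits multiple ExecStart= settings only for Type=oneshot services, so the inherited command must be cleared before the new one is set, exactly as the comments in the unit describe. A quick way to confirm the merged result on the node, assuming standard systemd tooling:
	systemctl cat docker.service                # the unit as systemd resolved it
	systemctl show docker.service -p ExecStart  # should list exactly one command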
	
	I0725 13:21:56.286275   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.357571   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:56.357744   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:56.357760   60183 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:21:56.482497   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:21:56.482513   60183 machine.go:91] provisioned docker machine in 1.368435196s
	I0725 13:21:56.482522   60183 start.go:307] post-start starting for "old-k8s-version-20220725131610-44543" (driver="docker")
	I0725 13:21:56.482527   60183 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:21:56.482601   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:21:56.482652   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.554006   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.642412   60183 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:21:56.645967   60183 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:21:56.645982   60183 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:21:56.645989   60183 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:21:56.645993   60183 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:21:56.646005   60183 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:21:56.646118   60183 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:21:56.646284   60183 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:21:56.646439   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:21:56.653543   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:21:56.673151   60183 start.go:310] post-start completed in 190.601782ms
	I0725 13:21:56.673236   60183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:21:56.673292   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.745535   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.830784   60183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:21:56.836577   60183 fix.go:57] fixHost completed within 2.375156628s
	I0725 13:21:56.836597   60183 start.go:82] releasing machines lock for "old-k8s-version-20220725131610-44543", held for 2.375196554s
	I0725 13:21:56.836691   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:56.908406   60183 ssh_runner.go:195] Run: systemctl --version
	I0725 13:21:56.908410   60183 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:21:56.908468   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.908476   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.984091   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.985901   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:57.198212   60183 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:21:57.207890   60183 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:21:57.207956   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:21:57.219448   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:21:57.232370   60183 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:21:57.302875   60183 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:21:57.376726   60183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:21:57.442738   60183 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:21:57.646325   60183 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:21:57.685082   60183 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:21:57.778355   60183 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 13:21:57.778528   60183 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725131610-44543 dig +short host.docker.internal
	I0725 13:21:57.907625   60183 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:21:57.907747   60183 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:21:57.911756   60183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
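	The one-liner above deliberately finishes with cp rather than mv: inside the container /etc/hosts is a Docker-managed bind mount, so it generally cannot be replaced by rename, only overwritten in place. Unrolled, the idiom is a sketch like:
	# regenerate the file without the old entry, append the new one,
	# then copy over the bind-mounted file (mv/rename onto the mount would fail)
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts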
	I0725 13:21:57.921003   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:57.991786   60183 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:21:57.991860   60183 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:21:58.022698   60183 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:21:58.022711   60183 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:21:58.022798   60183 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:21:58.052074   60183 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:21:58.052091   60183 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:21:58.052214   60183 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:21:58.125974   60183 cni.go:95] Creating CNI manager for ""
	I0725 13:21:58.125987   60183 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:21:58.126001   60183 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:21:58.126035   60183 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725131610-44543 NodeName:old-k8s-version-20220725131610-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:21:58.126181   60183 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725131610-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725131610-44543
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
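	The generated config above is what the scp a few lines below ships to /var/tmp/minikube/kubeadm.yaml.new; once promoted to kubeadm.yaml it is consumed through kubeadm's --config flag. A hedged sketch of the equivalent manual invocation, assuming the binaries directory listed later in the log (the restart path actually drives individual kubeadm phases rather than a plain init):
	# illustrative only; minikube's restart flow is more granular than this
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml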
	
	I0725 13:21:58.126269   60183 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725131610-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:21:58.126356   60183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 13:21:58.134118   60183 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:21:58.134189   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:21:58.141324   60183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 13:21:58.154757   60183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:21:58.167498   60183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 13:21:58.179668   60183 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:21:58.183227   60183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:21:58.192523   60183 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543 for IP: 192.168.67.2
	I0725 13:21:58.192631   60183 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:21:58.192684   60183 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:21:58.192765   60183 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.key
	I0725 13:21:58.192828   60183 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e
	I0725 13:21:58.192872   60183 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key
	I0725 13:21:58.193074   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:21:58.193119   60183 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:21:58.193132   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:21:58.193167   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:21:58.193202   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:21:58.193229   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:21:58.193300   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:21:58.193838   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:21:58.210321   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 13:21:58.228970   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:21:58.245718   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:21:58.262421   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:21:58.279214   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:21:58.297844   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:21:58.314779   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:21:58.331511   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:21:58.348755   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:21:58.365526   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:21:58.382721   60183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:21:58.395675   60183 ssh_runner.go:195] Run: openssl version
	I0725 13:21:58.401635   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:21:58.409787   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.413787   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.413829   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.419159   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:21:58.426230   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:21:58.434193   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.438053   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.438096   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.443183   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:21:58.450469   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:21:58.457925   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.461769   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.461816   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.467074   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:21:58.474326   60183 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:58.474425   60183 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:21:58.502814   60183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:21:58.510458   60183 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:21:58.510472   60183 kubeadm.go:626] restartCluster start
	I0725 13:21:58.510516   60183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:21:58.517042   60183 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:58.517101   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:58.590607   60183 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:21:58.590795   60183 kubeconfig.go:127] "old-k8s-version-20220725131610-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:21:58.591098   60183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:21:58.592462   60183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:21:58.600334   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:58.600385   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:58.608459   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:57.402044   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:59.904598   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:01.618536   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.618580   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.626858   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.626869   60183 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 13:22:01.626876   60183 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:22:01.626930   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:22:01.657002   60183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:22:01.667081   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:22:01.674438   60183 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jul 25 20:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jul 25 20:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jul 25 20:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jul 25 20:18 /etc/kubernetes/scheduler.conf
	
	I0725 13:22:01.674489   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:22:01.681528   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:22:01.688711   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:22:01.695801   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:22:01.703394   60183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:22:01.710791   60183 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:22:01.710802   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:01.761240   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.581237   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.790070   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.852549   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.904809   60183 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:22:02.904874   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:03.415372   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:02.402433   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:04.905321   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:02.415864   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:02.915524   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:02.945401   60183 logs.go:274] 0 containers: []
	W0725 13:23:02.945413   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:02.945478   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:02.973644   60183 logs.go:274] 0 containers: []
	W0725 13:23:02.973657   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:02.973724   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:03.002721   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.002734   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:03.002788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:03.031519   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.031535   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:03.031603   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:03.061426   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.061439   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:03.061493   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:03.089574   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.089587   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:03.089645   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:03.118793   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.118804   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:03.118869   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:03.148187   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.148199   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:03.148205   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:03.148211   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:03.189187   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:03.189204   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:03.200922   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:03.200939   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:03.253329   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:03.253345   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:03.253354   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:03.267096   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:03.267108   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:04.404910   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:06.406259   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:05.318288   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051108125s)
	I0725 13:23:32.825418   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:32.915954   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:32.946505   60183 logs.go:274] 0 containers: []
	W0725 13:23:32.946517   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:32.946580   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:32.976363   60183 logs.go:274] 0 containers: []
	W0725 13:23:32.976376   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:32.976442   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:33.004925   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.004938   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:33.004996   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:33.034716   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.034728   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:33.034788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:33.062554   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.062566   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:33.062623   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:33.091734   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.091746   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:33.091805   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:33.120846   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.120858   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:33.120924   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:33.149461   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.149474   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:33.149481   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:33.149492   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:33.188609   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:33.188621   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:33.200250   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:33.200263   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:33.252688   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:33.252701   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:33.252711   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:33.266791   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:33.266803   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:32.907384   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:34.907760   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:35.325253   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05837666s)
	I0725 13:23:37.827729   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:37.918114   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:37.948670   60183 logs.go:274] 0 containers: []
	W0725 13:23:37.948682   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:37.948740   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:37.978509   60183 logs.go:274] 0 containers: []
	W0725 13:23:37.978521   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:37.978606   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:38.008790   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.008805   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:38.008873   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:38.037601   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.037614   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:38.037674   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:38.066393   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.066407   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:38.066480   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:38.094341   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.094354   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:38.094413   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:38.123151   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.123163   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:38.123228   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:38.151883   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.151894   60183 logs.go:276] No container was found matching "kube-controller-manager"
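
Each scan cycle enumerates the control-plane pieces by container name: with dockershim/cri-dockerd, kubelet-managed containers are named `k8s_<container>_<pod>_...`, so filtering `docker ps -a` on the `k8s_<component>` prefix finds them; here every filter returns an empty list. A sketch replaying that scan, assuming the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listK8sContainers replays the per-component scan from the log: one
    // `docker ps -a` per component, filtered on the k8s_<name> prefix that
    // kubelet-managed containers carry under dockershim/cri-dockerd.
    func listK8sContainers() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kubernetes-dashboard", "storage-provisioner",
            "kube-controller-manager",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

    func main() { listK8sContainers() }
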
	I0725 13:23:38.151901   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:38.151913   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:38.164057   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:38.164070   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:38.217391   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:38.217404   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:38.217411   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:38.232266   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:38.232279   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:37.406227   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:39.905335   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:41.906715   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:40.284014   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051664102s)
	I0725 13:23:40.284120   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:40.284127   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:42.824116   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:42.916418   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:42.945082   60183 logs.go:274] 0 containers: []
	W0725 13:23:42.945095   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:42.945161   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:42.976210   60183 logs.go:274] 0 containers: []
	W0725 13:23:42.976221   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:42.976283   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:43.004760   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.004772   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:43.004828   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:43.034045   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.034057   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:43.034136   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:43.063735   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.063747   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:43.063807   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:43.092971   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.092984   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:43.093046   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:43.122089   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.122102   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:43.122165   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:43.151913   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.151927   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:43.151933   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:43.151940   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:43.191482   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:43.191500   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:43.204833   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:43.204851   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:43.266710   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:43.266721   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:43.266728   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:43.280481   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:43.280493   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:44.406475   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:46.406672   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
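
Interleaved with that diagnostic loop, the second process (pid 59920) is polling the metrics-server pod's Ready condition every couple of seconds (pod_ready.go) against a 4m0s deadline. A hedged client-go sketch of the same check — the kubeconfig path and client wiring here are assumptions for illustration, not minikube's actual code path:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True — the same
    // condition the log prints as `"Ready":"False"` on every poll.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s timeout reported later
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-5c6f97fb75-dlbq9", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
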
	I0725 13:23:45.335689   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055123517s)
	I0725 13:23:47.836914   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:47.916508   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:47.947483   60183 logs.go:274] 0 containers: []
	W0725 13:23:47.947496   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:47.947555   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:47.976844   60183 logs.go:274] 0 containers: []
	W0725 13:23:47.976858   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:47.976921   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:48.006778   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.006790   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:48.006847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:48.036361   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.036374   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:48.036438   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:48.066116   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.066132   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:48.066196   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:48.095236   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.095249   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:48.095308   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:48.124615   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.124627   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:48.124684   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:48.154933   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.154945   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:48.154951   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:48.154958   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:48.196269   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:48.196282   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:48.208071   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:48.208082   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:48.261791   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:48.261801   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:48.261807   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:48.275612   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:48.275624   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:48.408066   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:50.906907   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:50.328143   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05244781s)
	I0725 13:23:52.830598   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:52.916555   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:52.946432   60183 logs.go:274] 0 containers: []
	W0725 13:23:52.946449   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:52.946523   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:52.976593   60183 logs.go:274] 0 containers: []
	W0725 13:23:52.976605   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:52.976673   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:53.010114   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.010126   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:53.010182   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:53.038708   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.038720   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:53.038781   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:53.067454   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.067466   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:53.067528   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:53.095959   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.095971   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:53.096030   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:53.126372   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.126385   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:53.126450   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:53.155509   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.155523   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:53.155530   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:53.155537   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:53.195731   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:53.195744   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:53.207459   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:53.207473   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:53.260748   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:53.260768   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:53.260775   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:53.274157   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:53.274169   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:53.407193   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:55.408050   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:55.324854   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0506133s)
	I0725 13:23:57.825573   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:57.917185   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:57.947745   60183 logs.go:274] 0 containers: []
	W0725 13:23:57.947758   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:57.947814   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:57.975616   60183 logs.go:274] 0 containers: []
	W0725 13:23:57.975628   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:57.975690   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:58.004104   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.004116   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:58.004180   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:58.032249   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.032261   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:58.032330   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:58.062006   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.062021   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:58.062074   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:58.090537   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.090548   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:58.090607   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:58.119003   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.119015   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:58.119071   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:58.149646   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.149660   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:58.149668   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:58.149677   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:57.905980   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:59.906275   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:00.207223   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057470129s)
	I0725 13:24:00.207346   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:00.207356   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:00.246278   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:00.246294   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:00.257799   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:00.257812   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:00.311151   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:00.311187   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:00.311201   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:02.825458   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:02.916753   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:02.946058   60183 logs.go:274] 0 containers: []
	W0725 13:24:02.946070   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:02.946127   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:02.974437   60183 logs.go:274] 0 containers: []
	W0725 13:24:02.974450   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:02.974506   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:03.004307   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.004320   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:03.004405   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:03.034237   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.034248   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:03.034308   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:03.066725   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.066737   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:03.066792   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:03.097377   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.097389   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:03.097449   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:03.126782   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.126794   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:03.126857   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:03.155129   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.155142   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:03.155149   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:03.155155   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:03.195481   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:03.195494   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:03.206820   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:03.206835   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:03.259802   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:03.259812   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:03.259818   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:03.273974   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:03.273987   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:02.406621   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:04.908053   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:06.901898   59920 pod_ready.go:81] duration metric: took 4m0.006312009s waiting for pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace to be "Ready" ...
	E0725 13:24:06.902006   59920 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:24:06.902038   59920 pod_ready.go:38] duration metric: took 4m15.045888844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:24:06.902078   59920 kubeadm.go:630] restartCluster took 4m24.382062853s
	W0725 13:24:06.902211   59920 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 13:24:06.902238   59920 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:24:05.328003   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053944862s)
	I0725 13:24:07.828361   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:07.917072   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:07.949966   60183 logs.go:274] 0 containers: []
	W0725 13:24:07.949983   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:07.950052   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:07.988332   60183 logs.go:274] 0 containers: []
	W0725 13:24:07.988346   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:07.988409   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:08.027678   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.027690   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:08.027756   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:08.059823   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.059836   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:08.059905   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:08.093298   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.093311   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:08.093374   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:08.131132   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.131144   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:08.131200   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:08.163873   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.163888   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:08.163950   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:08.195373   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.195386   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:08.195392   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:08.195399   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:08.239634   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:08.239650   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:08.257904   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:08.257919   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:08.319885   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:08.319898   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:08.319904   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:08.336710   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:08.336724   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:09.233197   59920 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.330876471s)
	I0725 13:24:09.233255   59920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:24:09.242559   59920 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:24:09.250536   59920 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:24:09.250580   59920 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:24:09.257644   59920 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
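
With `kubeadm reset` done, the stale-config check simply stats the four kubeconfig files under /etc/kubernetes; since all are gone, there is no stale config to clean up and minikube proceeds straight to `kubeadm init`. The check amounts to something like:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The stale-config check from the log: if any of these files is
        // missing, there is nothing to clean up before `kubeadm init`.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        stale := true
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("%s: %v\n", f, err)
                stale = false
            }
        }
        fmt.Println("stale config present:", stale)
    }
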
	I0725 13:24:09.257666   59920 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:24:09.560158   59920 out.go:204]   - Generating certificates and keys ...
	I0725 13:24:10.140505   59920 out.go:204]   - Booting up control plane ...
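
The recovery sequence above, condensed: once the 4m wait expired, minikube gave up restarting the existing cluster, ran `kubeadm reset --force` against the cri-dockerd socket, and re-ran `kubeadm init` with a long `--ignore-preflight-errors` list (SystemVerification among them, since the docker driver runs the node inside a container). A local sketch of those two shell invocations, with paths taken from the log and the ignore list abridged:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runShell mimics the ssh_runner Run lines in the log, but locally: every
    // command there is wrapped in `/bin/bash -c "..."`.
    func runShell(cmd string) error {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("$ %s\n%s", cmd, out)
        return err
    }

    func main() {
        binPath := "/var/lib/minikube/binaries/v1.24.2" // path as printed in the log

        // 1. Tear down the stale control-plane state.
        _ = runShell(`sudo env PATH="` + binPath + `:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)

        // 2. Re-initialize from the generated config. The ignore list is
        // abridged here; the log shows the full set of preflight checks skipped.
        _ = runShell(`sudo env PATH="` + binPath + `:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification`)
    }
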
	I0725 13:24:10.401425   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06462793s)
	I0725 13:24:12.901765   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:12.917037   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:12.952668   60183 logs.go:274] 0 containers: []
	W0725 13:24:12.952681   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:12.952736   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:12.982943   60183 logs.go:274] 0 containers: []
	W0725 13:24:12.982955   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:12.983017   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:13.013797   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.013810   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:13.013876   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:13.044254   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.044267   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:13.044326   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:13.074217   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.074230   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:13.074293   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:13.109560   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.109573   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:13.109636   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:13.140893   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.140906   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:13.140965   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:13.176452   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.176466   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:13.176474   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:13.176482   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:13.221236   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:13.221274   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:13.234259   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:13.234274   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:13.291367   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:13.291377   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:13.291384   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:13.306619   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:13.306632   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:17.187291   59920 out.go:204]   - Configuring RBAC rules ...
	I0725 13:24:15.363070   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056365342s)
	I0725 13:24:17.865239   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:17.917268   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:17.948026   60183 logs.go:274] 0 containers: []
	W0725 13:24:17.948038   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:17.948094   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:17.978209   60183 logs.go:274] 0 containers: []
	W0725 13:24:17.978222   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:17.978280   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:18.006707   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.006718   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:18.006775   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:18.037659   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.037671   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:18.037726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:18.065998   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.066016   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:18.066075   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:18.096217   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.096230   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:18.096286   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:18.126356   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.126369   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:18.126427   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:18.155056   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.155068   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:18.155074   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:18.155088   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:18.210436   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:18.210447   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:18.210455   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:18.224505   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:18.224517   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:17.598425   59920 cni.go:95] Creating CNI manager for ""
	I0725 13:24:17.598438   59920 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:24:17.598460   59920 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:24:17.598535   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=no-preload-20220725131741-44543 minikube.k8s.io/updated_at=2022_07_25T13_24_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:17.598542   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:17.739103   59920 ops.go:34] apiserver oom_adj: -16
	I0725 13:24:17.739207   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:18.299078   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:18.799356   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:19.298748   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:19.798859   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:20.298802   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:20.800927   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:21.298883   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:21.798915   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:22.298950   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
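
After `kubeadm init`, minikube binds cluster-admin to the kube-system default service account (the `kubectl create clusterrolebinding minikube-rbac ...` run above) and then polls `kubectl get sa default` roughly every 500ms until the token controller has created that ServiceAccount — the step the log later labels elevateKubeSystemPrivileges. A client-go sketch of that wait (kubeconfig path and wiring assumed; the bare `get sa default` targets the default namespace):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll until the token controller has created the "default"
        // ServiceAccount — the readiness signal the repeated
        // `kubectl get sa default` runs in the log are checking for.
        err = wait.PollImmediate(500*time.Millisecond, time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(
                context.TODO(), "default", metav1.GetOptions{})
            return err == nil, nil // not found yet: keep polling
        })
        if err != nil {
            fmt.Println("gave up waiting for default ServiceAccount:", err)
            return
        }
        fmt.Println("default ServiceAccount exists")
    }
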
	I0725 13:24:20.280940   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056351635s)
	I0725 13:24:20.281045   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:20.281052   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:20.322100   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:20.322118   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:22.836188   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:22.918171   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:22.949256   60183 logs.go:274] 0 containers: []
	W0725 13:24:22.949269   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:22.949330   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:22.979856   60183 logs.go:274] 0 containers: []
	W0725 13:24:22.979872   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:22.979930   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:23.009212   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.009224   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:23.009280   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:23.040003   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.040014   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:23.040069   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:23.070063   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.070075   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:23.070129   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:23.098168   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.098181   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:23.098239   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:23.127379   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.127392   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:23.127449   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:23.156617   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.156630   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:23.156637   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:23.156644   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:23.208837   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:23.208847   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:23.208854   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:23.222431   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:23.222443   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:22.800905   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:23.299002   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:23.799005   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:24.299025   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:24.800802   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:25.299374   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:25.799034   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:26.300134   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:26.799616   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:27.299108   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:25.276610   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054096015s)
	I0725 13:24:25.276716   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:25.276723   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:25.317113   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:25.317132   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:27.831788   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:27.917665   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:27.951671   60183 logs.go:274] 0 containers: []
	W0725 13:24:27.951683   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:27.951742   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:27.981792   60183 logs.go:274] 0 containers: []
	W0725 13:24:27.981805   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:27.981861   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:28.010660   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.010675   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:28.010745   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:28.039897   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.039910   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:28.039966   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:28.069312   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.069324   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:28.069379   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:28.098531   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.098544   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:28.098599   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:28.127653   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.127666   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:28.127720   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:28.156147   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.156162   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:28.156169   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:28.156177   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:28.202017   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:28.202037   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:28.219890   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:28.219905   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:28.279250   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:28.279263   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:28.279270   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:28.294488   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:28.294502   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:27.801186   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:28.299013   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:28.800294   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:29.300649   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:29.799066   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:29.855550   59920 kubeadm.go:1045] duration metric: took 12.256709186s to wait for elevateKubeSystemPrivileges.
	I0725 13:24:29.855565   59920 kubeadm.go:397] StartCluster complete in 4m47.372303679s
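The 500ms-spaced `kubectl get sa default` probes above are the tail end of elevateKubeSystemPrivileges: minikube re-runs the same query until the default ServiceAccount exists. Each probe is the single command from the log:

	sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default \
	  --kubeconfig=/var/lib/minikube/kubeconfig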
	I0725 13:24:29.855580   59920 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:24:29.855656   59920 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:24:29.856179   59920 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:24:30.372184   59920 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220725131741-44543" rescaled to 1
	I0725 13:24:30.372224   59920 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:24:30.372230   59920 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:24:30.372264   59920 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:24:30.372424   59920 config.go:178] Loaded profile config "no-preload-20220725131741-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:24:30.392785   59920 out.go:177] * Verifying Kubernetes components...
	I0725 13:24:30.392883   59920 addons.go:65] Setting dashboard=true in profile "no-preload-20220725131741-44543"
	I0725 13:24:30.392886   59920 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220725131741-44543"
	I0725 13:24:30.413664   59920 addons.go:153] Setting addon dashboard=true in "no-preload-20220725131741-44543"
	W0725 13:24:30.413684   59920 addons.go:162] addon dashboard should already be in state true
	I0725 13:24:30.413691   59920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:24:30.413664   59920 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220725131741-44543"
	W0725 13:24:30.413704   59920 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:24:30.392924   59920 addons.go:65] Setting metrics-server=true in profile "no-preload-20220725131741-44543"
	I0725 13:24:30.413726   59920 addons.go:153] Setting addon metrics-server=true in "no-preload-20220725131741-44543"
	I0725 13:24:30.413744   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.413746   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.392882   59920 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220725131741-44543"
	W0725 13:24:30.413754   59920 addons.go:162] addon metrics-server should already be in state true
	I0725 13:24:30.413768   59920 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220725131741-44543"
	I0725 13:24:30.413781   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.414105   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.414123   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.414172   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.414226   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.458052   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.458055   59920 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:24:30.572793   59920 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:24:30.579147   59920 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220725131741-44543"
	I0725 13:24:30.592379   59920 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:24:30.613626   59920 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:24:30.646222   59920 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220725131741-44543" to be "Ready" ...
	W0725 13:24:30.650686   59920 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:24:30.650732   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:24:30.708629   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:24:30.653982   59920 node_ready.go:49] node "no-preload-20220725131741-44543" has status "Ready":"True"
	I0725 13:24:30.708646   59920 node_ready.go:38] duration metric: took 57.970336ms waiting for node "no-preload-20220725131741-44543" to be "Ready" ...
	I0725 13:24:30.687570   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.708657   59920 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:24:30.708685   59920 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:24:30.745654   59920 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:24:30.708708   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.709081   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.715940   59920 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:30.745705   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:24:30.782639   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:24:30.782656   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:24:30.782710   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.782759   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.900399   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:30.901115   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:30.902059   59920 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:24:30.902074   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:24:30.902161   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.904793   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:30.984451   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:31.086020   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:24:31.102717   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:24:31.102730   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:24:31.111375   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:24:31.111390   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:24:31.192107   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:24:31.192120   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:24:31.215194   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:24:31.291555   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:24:31.291573   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:24:31.298933   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:24:31.298948   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:24:31.381035   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:24:31.381051   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:24:31.390048   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:24:31.390070   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:24:31.405204   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:24:31.405219   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:24:31.497359   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:24:31.497362   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:24:31.497375   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:24:31.591751   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:24:31.591768   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:24:31.593614   59920 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.135485755s)
	I0725 13:24:31.593659   59920 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
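The 1.13s Completed line above is the CoreDNS host-record injection spelled out in the command at 13:24:30.458055: the coredns ConfigMap is read with kubectl, sed splices in a hosts block mapping host.minikube.internal to the host gateway (192.168.65.2 here) ahead of the `forward . /etc/resolv.conf` directive, and the result is written back with `kubectl replace -f -`. A hedged spot check that the record landed, using plain kubectl:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# expect a line containing: 192.168.65.2 host.minikube.internal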
	I0725 13:24:31.683293   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:24:31.683308   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:24:31.721561   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:24:31.721575   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:24:31.885159   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:24:32.216124   59920 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220725131741-44543"
	I0725 13:24:32.632247   59920 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
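Each of the four addons followed the same two-step pattern visible above: render the manifest in memory, scp it to /etc/kubernetes/addons/ on the node, then apply it with the cluster's own kubectl binary. The apply step exactly as the log runs it for storage-provisioner:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.24.2/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml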
	I0725 13:24:30.350962   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056388033s)
	I0725 13:24:32.851327   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:32.919793   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:32.950452   60183 logs.go:274] 0 containers: []
	W0725 13:24:32.950464   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:32.950519   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:32.978393   60183 logs.go:274] 0 containers: []
	W0725 13:24:32.978405   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:32.978461   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:33.008027   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.008039   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:33.008095   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:33.038231   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.038243   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:33.038297   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:33.068037   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.068049   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:33.068108   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:33.098144   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.098156   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:33.098219   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:33.131474   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.131488   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:33.131551   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:33.163043   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.163057   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:33.163064   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:33.163071   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:33.225128   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:33.225142   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:33.225148   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:33.240300   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:33.240316   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:32.673165   59920 addons.go:414] enableAddons completed in 2.300825743s
	I0725 13:24:32.805388   59920 pod_ready.go:102] pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:34.805957   59920 pod_ready.go:102] pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:35.303510   59920 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-qfmxw" not found
	I0725 13:24:35.303527   59920 pod_ready.go:81] duration metric: took 4.520786298s waiting for pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace to be "Ready" ...
	E0725 13:24:35.303534   59920 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-qfmxw" not found
	I0725 13:24:35.303539   59920 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-r68q6" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.310951   59920 pod_ready.go:92] pod "coredns-6d4b75cb6d-r68q6" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.310962   59920 pod_ready.go:81] duration metric: took 7.417207ms waiting for pod "coredns-6d4b75cb6d-r68q6" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.310968   59920 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.318019   59920 pod_ready.go:92] pod "etcd-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.318038   59920 pod_ready.go:81] duration metric: took 7.06362ms waiting for pod "etcd-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.318051   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.324559   59920 pod_ready.go:92] pod "kube-apiserver-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.324574   59920 pod_ready.go:81] duration metric: took 6.512119ms waiting for pod "kube-apiserver-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.324587   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.332274   59920 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.332286   59920 pod_ready.go:81] duration metric: took 7.691522ms waiting for pod "kube-controller-manager-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.332299   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5xd86" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.504459   59920 pod_ready.go:92] pod "kube-proxy-5xd86" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.504471   59920 pod_ready.go:81] duration metric: took 172.155311ms waiting for pod "kube-proxy-5xd86" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.504480   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.903818   59920 pod_ready.go:92] pod "kube-scheduler-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.903828   59920 pod_ready.go:81] duration metric: took 399.331945ms waiting for pod "kube-scheduler-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.903847   59920 pod_ready.go:38] duration metric: took 5.195020317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:24:35.903878   59920 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:24:35.903926   59920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:35.914798   59920 api_server.go:71] duration metric: took 5.542375251s to wait for apiserver process to appear ...
	I0725 13:24:35.914834   59920 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:24:35.914844   59920 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58799/healthz ...
	I0725 13:24:35.921222   59920 api_server.go:266] https://127.0.0.1:58799/healthz returned 200:
	ok
	I0725 13:24:35.922647   59920 api_server.go:140] control plane version: v1.24.2
	I0725 13:24:35.922659   59920 api_server.go:130] duration metric: took 7.816862ms to wait for apiserver health ...
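The health gate above is a plain HTTPS GET against /healthz on the host-mapped API port (58799 for this run); a 200 response with body "ok" counts as healthy, after which the server version is read back. An equivalent manual check (a sketch only; the port is specific to this run, and -k skips certificate verification for the spot check):

	curl -ks https://127.0.0.1:58799/healthz                    # prints: ok
	curl -ks https://127.0.0.1:58799/version | grep gitVersion  # "v1.24.2"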
	I0725 13:24:35.922664   59920 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:24:36.106279   59920 system_pods.go:59] 8 kube-system pods found
	I0725 13:24:36.106293   59920 system_pods.go:61] "coredns-6d4b75cb6d-r68q6" [623858be-728c-4b5e-8e31-b5713757b87c] Running
	I0725 13:24:36.106297   59920 system_pods.go:61] "etcd-no-preload-20220725131741-44543" [85aead75-e56d-4567-b4b3-67e65f0996ad] Running
	I0725 13:24:36.106301   59920 system_pods.go:61] "kube-apiserver-no-preload-20220725131741-44543" [d465aaac-22eb-46d8-875a-0262cfd269c0] Running
	I0725 13:24:36.106304   59920 system_pods.go:61] "kube-controller-manager-no-preload-20220725131741-44543" [f2309e0f-bca3-409e-a2be-fc7577409b36] Running
	I0725 13:24:36.106307   59920 system_pods.go:61] "kube-proxy-5xd86" [f7413e53-4981-4223-bae2-7a94b1c41206] Running
	I0725 13:24:36.106317   59920 system_pods.go:61] "kube-scheduler-no-preload-20220725131741-44543" [255f59a4-6f75-45a9-8640-2bced1f641fd] Running
	I0725 13:24:36.106324   59920 system_pods.go:61] "metrics-server-5c6f97fb75-wx6t6" [3b1612e1-6629-4a77-bc5c-96599e1fbede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:24:36.106330   59920 system_pods.go:61] "storage-provisioner" [1e217a5f-3b4f-491b-8ff0-b385e6032f65] Running
	I0725 13:24:36.106335   59920 system_pods.go:74] duration metric: took 183.662109ms to wait for pod list to return data ...
	I0725 13:24:36.106339   59920 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:24:36.303371   59920 default_sa.go:45] found service account: "default"
	I0725 13:24:36.303384   59920 default_sa.go:55] duration metric: took 197.033995ms for default service account to be created ...
	I0725 13:24:36.303391   59920 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:24:36.506245   59920 system_pods.go:86] 8 kube-system pods found
	I0725 13:24:36.506263   59920 system_pods.go:89] "coredns-6d4b75cb6d-r68q6" [623858be-728c-4b5e-8e31-b5713757b87c] Running
	I0725 13:24:36.506268   59920 system_pods.go:89] "etcd-no-preload-20220725131741-44543" [85aead75-e56d-4567-b4b3-67e65f0996ad] Running
	I0725 13:24:36.506272   59920 system_pods.go:89] "kube-apiserver-no-preload-20220725131741-44543" [d465aaac-22eb-46d8-875a-0262cfd269c0] Running
	I0725 13:24:36.506276   59920 system_pods.go:89] "kube-controller-manager-no-preload-20220725131741-44543" [f2309e0f-bca3-409e-a2be-fc7577409b36] Running
	I0725 13:24:36.506280   59920 system_pods.go:89] "kube-proxy-5xd86" [f7413e53-4981-4223-bae2-7a94b1c41206] Running
	I0725 13:24:36.506284   59920 system_pods.go:89] "kube-scheduler-no-preload-20220725131741-44543" [255f59a4-6f75-45a9-8640-2bced1f641fd] Running
	I0725 13:24:36.506291   59920 system_pods.go:89] "metrics-server-5c6f97fb75-wx6t6" [3b1612e1-6629-4a77-bc5c-96599e1fbede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:24:36.506296   59920 system_pods.go:89] "storage-provisioner" [1e217a5f-3b4f-491b-8ff0-b385e6032f65] Running
	I0725 13:24:36.506302   59920 system_pods.go:126] duration metric: took 202.90215ms to wait for k8s-apps to be running ...
	I0725 13:24:36.506308   59920 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 13:24:36.506359   59920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:24:36.516963   59920 system_svc.go:56] duration metric: took 10.650541ms WaitForService to wait for kubelet.
	I0725 13:24:36.516979   59920 kubeadm.go:572] duration metric: took 6.144559272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:24:36.516994   59920 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:24:36.703723   59920 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:24:36.703737   59920 node_conditions.go:123] node cpu capacity is 6
	I0725 13:24:36.703744   59920 node_conditions.go:105] duration metric: took 186.741628ms to run NodePressure ...
	I0725 13:24:36.703754   59920 start.go:216] waiting for startup goroutines ...
	I0725 13:24:36.737087   59920 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:24:36.758866   59920 out.go:177] * Done! kubectl is now configured to use "no-preload-20220725131741-44543" cluster and "default" namespace by default
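With the context set, the usual smoke test applies (hedged; plain kubectl against the profile named above):

	kubectl config current-context   # no-preload-20220725131741-44543
	kubectl get pods -A              # the eight kube-system pods listed at 13:24:36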
	I0725 13:24:35.304650   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064263654s)
	I0725 13:24:35.304758   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:35.304765   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:35.359741   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:35.359783   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:37.873389   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:37.918418   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:37.955388   60183 logs.go:274] 0 containers: []
	W0725 13:24:37.955407   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:37.955466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:37.996813   60183 logs.go:274] 0 containers: []
	W0725 13:24:37.996824   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:37.996887   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:38.029638   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.029653   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:38.029717   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:38.063668   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.063681   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:38.063734   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:38.097181   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.097193   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:38.097248   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:38.128322   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.128337   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:38.128423   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:38.161589   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.161605   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:38.161667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:38.199476   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.199488   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:38.199495   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:38.199501   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:38.263856   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:38.263867   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:38.263874   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:38.278755   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:38.278771   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:40.336830   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05798511s)
	I0725 13:24:40.336946   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:40.336958   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:40.385712   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:40.385733   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:42.900882   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:42.917988   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:42.955271   60183 logs.go:274] 0 containers: []
	W0725 13:24:42.955286   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:42.955386   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:42.990842   60183 logs.go:274] 0 containers: []
	W0725 13:24:42.990861   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:42.990927   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:43.024751   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.024763   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:43.024824   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:43.061278   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.061296   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:43.061361   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:43.091254   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.091266   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:43.091323   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:43.121299   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.121311   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:43.121385   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:43.150795   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.150808   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:43.150899   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:43.184239   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.184251   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:43.184258   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:43.184265   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:43.201029   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:43.201043   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:45.254970   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05385567s)
	I0725 13:24:45.255075   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:45.255081   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:45.294400   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:45.294415   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:45.306088   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:45.306101   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:45.358898   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
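From here the log settles into the cycle that repeats until the start timeout: pgrep for the apiserver, eight empty docker ps -a --filter=name=k8s_* probes, then kubelet/dmesg/describe-nodes/Docker/container-status gathering, with describe nodes refused each time. Compressed into a sketch (the roughly five-second cadence is inferred from the timestamps; this is not minikube's actual code):

	while true; do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kubernetes-dashboard storage-provisioner kube-controller-manager; do
	    docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	  done
	  sleep 5
	done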
	I0725 13:24:47.859143   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:47.918290   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:47.948745   60183 logs.go:274] 0 containers: []
	W0725 13:24:47.948757   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:47.948813   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:47.978054   60183 logs.go:274] 0 containers: []
	W0725 13:24:47.978065   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:47.978125   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:48.006969   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.006982   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:48.007039   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:48.037417   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.037433   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:48.037509   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:48.067050   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.067063   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:48.067118   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:48.095883   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.095896   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:48.095950   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:48.123973   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.123985   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:48.124042   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:48.152316   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.152332   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:48.152341   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:48.152349   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:48.194780   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:48.194796   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:48.207031   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:48.207044   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:48.260819   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:48.260831   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:48.260839   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:48.274383   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:48.274397   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:50.326332   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051862489s)
	I0725 13:24:52.827101   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:52.918437   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:52.951150   60183 logs.go:274] 0 containers: []
	W0725 13:24:52.951162   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:52.951220   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:52.985739   60183 logs.go:274] 0 containers: []
	W0725 13:24:52.985753   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:52.985815   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:53.016602   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.016612   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:53.016659   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:53.046448   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.046459   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:53.046517   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:53.078374   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.078390   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:53.078466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:53.123048   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.123061   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:53.123123   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:53.154579   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.154591   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:53.154646   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:53.195527   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.195542   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:53.195551   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:53.195559   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:53.241474   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:53.241487   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:53.253883   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:53.253895   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:53.311986   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:53.312000   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:53.312008   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:53.327743   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:53.327764   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:55.393400   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065560615s)
	I0725 13:24:57.895862   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:57.919394   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:57.951377   60183 logs.go:274] 0 containers: []
	W0725 13:24:57.951389   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:57.951444   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:57.979788   60183 logs.go:274] 0 containers: []
	W0725 13:24:57.979801   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:57.979860   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:58.008898   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.008911   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:58.008967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:58.037016   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.037029   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:58.037089   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:58.066009   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.066021   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:58.066079   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:58.093711   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.093724   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:58.093788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:58.123557   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.123570   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:58.123626   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:58.151991   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.152005   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:58.152011   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:58.152018   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:58.191731   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:58.191751   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:58.205346   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:58.205362   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:58.258841   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:58.258853   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:58.258859   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:58.272311   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:58.272323   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:00.327133   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054738791s)
	I0725 13:25:02.829132   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:02.920662   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:02.950188   60183 logs.go:274] 0 containers: []
	W0725 13:25:02.950201   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:02.950260   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:02.978580   60183 logs.go:274] 0 containers: []
	W0725 13:25:02.978592   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:02.978646   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:03.006563   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.006576   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:03.006629   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:03.033788   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.033801   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:03.033855   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:03.062179   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.062191   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:03.062245   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:03.091169   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.091189   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:03.091248   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:03.120134   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.120147   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:03.120204   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:03.148569   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.148582   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:03.148588   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:03.148595   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:05.206723   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058055845s)
	I0725 13:25:05.206827   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:05.206834   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:05.244693   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:05.244707   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:05.256822   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:05.256833   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:05.308516   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:05.308531   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:05.308543   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:07.823907   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:07.918681   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:07.951167   60183 logs.go:274] 0 containers: []
	W0725 13:25:07.951179   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:07.951234   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:07.979414   60183 logs.go:274] 0 containers: []
	W0725 13:25:07.979427   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:07.979484   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:08.009108   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.009120   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:08.009178   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:08.038053   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.038070   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:08.038126   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:08.066112   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.066124   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:08.066178   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:08.094804   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.094817   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:08.094874   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:08.123943   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.123955   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:08.124011   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:08.153447   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.153460   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:08.153467   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:08.153474   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:10.205133   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051587517s)
	I0725 13:25:10.205247   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:10.205256   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:10.244085   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:10.244097   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:10.256079   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:10.256095   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:10.307417   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:10.307428   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:10.307435   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:12.823093   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:12.920941   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:12.952408   60183 logs.go:274] 0 containers: []
	W0725 13:25:12.952420   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:12.952476   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:12.981252   60183 logs.go:274] 0 containers: []
	W0725 13:25:12.981269   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:12.981333   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:13.010436   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.010447   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:13.010511   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:13.038121   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.038141   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:13.038208   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:13.068013   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.068025   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:13.068084   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:13.098322   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.098334   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:13.098389   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:13.128619   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.128634   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:13.128701   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:13.157149   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.157166   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:13.157179   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:13.157190   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:13.197722   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:13.197738   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:13.211125   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:13.211147   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:13.263333   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:13.263343   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:13.263350   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:13.276992   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:13.277004   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:15.333288   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056210609s)
	I0725 13:25:17.835729   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:17.921071   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:17.952398   60183 logs.go:274] 0 containers: []
	W0725 13:25:17.952411   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:17.952466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:17.983512   60183 logs.go:274] 0 containers: []
	W0725 13:25:17.983524   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:17.983579   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:18.012155   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.012166   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:18.012223   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:18.041437   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.041450   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:18.041509   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:18.071064   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.071076   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:18.071133   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:18.100563   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.100576   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:18.100632   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:18.130038   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.130065   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:18.130222   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:18.160243   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.160255   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:18.160262   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:18.160270   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:20.214840   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054499324s)
	I0725 13:25:20.214949   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:20.214957   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:20.254381   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:20.254393   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:20.265948   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:20.265960   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:20.317418   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:20.317429   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:20.317435   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:22.833394   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:22.919747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:22.949763   60183 logs.go:274] 0 containers: []
	W0725 13:25:22.949775   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:22.949833   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:22.979326   60183 logs.go:274] 0 containers: []
	W0725 13:25:22.979338   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:22.979394   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:23.008775   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.008789   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:23.008847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:23.038068   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.038098   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:23.038155   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:23.066885   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.066899   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:23.066948   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:23.095779   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.095792   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:23.095847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:23.124721   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.124733   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:23.124795   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:23.154730   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.154742   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:23.154749   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:23.154757   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:23.194256   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:23.194269   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:23.205440   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:23.205452   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:23.257296   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:23.257307   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:23.257314   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:23.270751   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:23.270762   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:19:38 UTC, end at Mon 2022-07-25 20:25:28 UTC. --
	Jul 25 20:24:07 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:07.915763306Z" level=info msg="ignoring event" container=39821950b38ab00f8752655ef0f245511a260096d951c92517af46d594389a99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:07 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:07.991136983Z" level=info msg="ignoring event" container=b536ea66b5cf820e5697e2f7a956759f8de8171373a91ac7f8fb78d930b1665a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.066762622Z" level=info msg="ignoring event" container=5f7442ccad5f29940739ffb524f5d1c5ac234f05bb7caaf44533f5ca248d22ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.145570047Z" level=info msg="ignoring event" container=4ad23b52fa422d7ee27ed9f392ca1df7a3b4c4369a4d6af920d72dd487720e03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.224301762Z" level=info msg="ignoring event" container=a54f833673b213d748f59b84c4014fcc2c1857b4b2bdb4bce11e4534adbb368e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.357136065Z" level=info msg="ignoring event" container=ada121ccb73f129400150bbeacf725a40d3c99ba3cf415017d9927e1f1d5fb7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.427254251Z" level=info msg="ignoring event" container=b6941f2eb3b4c450155e7e8f61ee644ead12e565c10ff46cd996e35a8a9c7e84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.495439137Z" level=info msg="ignoring event" container=2796da19f858e294e8a5edaa43356910aa75d8569bc4fbb3fa118d0ba216b424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.561367074Z" level=info msg="ignoring event" container=10b9793cb27de25c597b7554509affe40111afc5aa4786e3aad713355b66a863 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.627179417Z" level=info msg="ignoring event" container=d08baac5363c99937388bc7174e837634b28efbb152aaa221c2016900097d37e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.713055426Z" level=info msg="ignoring event" container=28a04e4f22fcc8248bc629f2c06254b85833057d8ecab7352e7e0f98f8dfb7eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.778087802Z" level=info msg="ignoring event" container=71cc07c20f5d340e2ab9306aa7bef2e5f75a32c093d8b7f72d7fd869e7489723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.922328815Z" level=info msg="ignoring event" container=17b52d03f3f1ed558d9fe8ae12d2983b1fbc735c16d4a3b00ce8f9b5bb41d53e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:31 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:31.692632872Z" level=info msg="ignoring event" container=168ab788ba30fb7d38d25041aa39f755f88b89fb9b2ed7e72ee38e0920941301 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:32 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:32.852655184Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:32 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:32.852999669Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:32 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:32.854193004Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:33 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:33.642896202Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 20:24:38 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:38.858900200Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:24:39 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:39.065440747Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:24:42 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:42.362706207Z" level=info msg="ignoring event" container=7a4002c356f64147127b14e01a54fbf3ec3edd4fbd572a9d2f5aba18668469f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:43 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:43.066702038Z" level=info msg="ignoring event" container=288472e032e6587fac9ce9e840fdd6e5207962b8fdef0b7a47ae511ad8dbc6da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:46 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:46.554301246Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:46 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:46.554437296Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:46 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:46.555667365Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	288472e032e65       a90209bb39e3d                                                                                    47 seconds ago       Exited              dashboard-metrics-scraper   1                   287eed0e4e275
	57bf001902d14       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   682693f29916d
	4ff08338fbe99       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   c51426c5d4dd9
	c876d04d85189       a4ca41631cc7a                                                                                    58 seconds ago       Running             coredns                     0                   6c2a6be0f69c8
	70d956cdef7ed       a634548d10b03                                                                                    58 seconds ago       Running             kube-proxy                  0                   46b7162b59dd0
	4186c25f0122e       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   874c936c512f4
	c5a0606c13196       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   80f3480abb767
	5fb6f0196adc0       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   f80ed10fef334
	9db6ed85741b4       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   03d53d1070abb
	
	* 
	* ==> coredns [c876d04d8518] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220725131741-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220725131741-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=no-preload-20220725131741-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_24_17_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:24:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220725131741-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:25:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-20220725131741-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                d6898a0d-e94b-4236-8262-d80df4c73be9
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-r68q6                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     59s
	  kube-system                 etcd-no-preload-20220725131741-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-no-preload-20220725131741-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-no-preload-20220725131741-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-5xd86                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-no-preload-20220725131741-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 metrics-server-5c6f97fb75-wx6t6                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         57s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-zqmnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-mhjvb                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x5 over 79s)  kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x5 over 79s)  kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x4 over 79s)  kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                72s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeReady
	  Normal  RegisteredNode           60s                node-controller  Node no-preload-20220725131741-44543 event: Registered Node no-preload-20220725131741-44543 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [9db6ed85741b] <==
	* {"level":"info","ts":"2022-07-25T20:24:11.345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:24:11.345Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:24:11.348Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:24:11.348Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:24:11.349Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:24:11.349Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:24:11.349Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-20220725131741-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:24:12.341Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:24:12.341Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:24:12.342Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:24:12.342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:25:29 up  1:06,  0 users,  load average: 2.26, 1.47, 1.36
	Linux no-preload-20220725131741-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c5a0606c1319] <==
	* I0725 20:24:15.208397       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 20:24:15.597978       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 20:24:15.624251       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 20:24:15.720478       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0725 20:24:15.724256       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0725 20:24:15.724940       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 20:24:15.727513       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 20:24:16.349393       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:24:17.410900       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:24:17.437007       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 20:24:17.444848       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:24:17.507884       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:24:29.783879       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 20:24:29.883999       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:24:31.698449       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:24:32.207744       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.4.98]
	I0725 20:24:32.592497       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.147.56]
	I0725 20:24:32.604607       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.173.15]
	W0725 20:24:33.094059       1 handler_proxy.go:102] no RequestInfo found in the context
	W0725 20:24:33.094066       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:24:33.094083       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:24:33.094088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0725 20:24:33.094100       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:24:33.095317       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [5fb6f0196adc] <==
	* I0725 20:24:30.204059       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-qfmxw"
	I0725 20:24:32.089801       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:24:32.094114       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:24:32.097509       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:24:32.101216       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-wx6t6"
	I0725 20:24:32.316205       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:24:32.324575       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.327617       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.382342       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0725 20:24:32.383914       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.384181       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:24:32.386810       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.388801       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.388840       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.392335       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.397081       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.397094       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 20:24:32.399687       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.399750       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.402460       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.402512       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:24:32.450188       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-zqmnn"
	I0725 20:24:32.450221       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-mhjvb"
	E0725 20:25:26.285965       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 20:25:26.350700       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [70d956cdef7e] <==
	* I0725 20:24:31.513235       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:24:31.513294       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:24:31.513316       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:24:31.694845       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:24:31.694917       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:24:31.694929       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:24:31.694955       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:24:31.694986       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:24:31.695315       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:24:31.695514       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:24:31.695531       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:24:31.696254       1 config.go:317] "Starting service config controller"
	I0725 20:24:31.696285       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:24:31.696299       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:24:31.696302       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:24:31.696635       1 config.go:444] "Starting node config controller"
	I0725 20:24:31.696661       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:24:31.796425       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:24:31.796478       1 shared_informer.go:262] Caches are synced for service config
	I0725 20:24:31.796814       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4186c25f0122] <==
	* W0725 20:24:15.147386       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:24:15.147437       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 20:24:15.182529       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:24:15.182615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:24:15.204059       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:24:15.204096       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:24:15.214008       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.214047       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.217764       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.217806       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.290192       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.290327       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.323830       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 20:24:15.323872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 20:24:15.356618       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:24:15.356657       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:24:15.388624       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.388661       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.510560       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 20:24:15.510647       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 20:24:15.513730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:24:15.513792       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:24:15.514218       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 20:24:15.514250       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0725 20:24:18.114400       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:19:38 UTC, end at Mon 2022-07-25 20:25:30 UTC. --
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.703952    9627 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746569    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8clk5\" (UniqueName: \"kubernetes.io/projected/1e217a5f-3b4f-491b-8ff0-b385e6032f65-kube-api-access-8clk5\") pod \"storage-provisioner\" (UID: \"1e217a5f-3b4f-491b-8ff0-b385e6032f65\") " pod="kube-system/storage-provisioner"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746614    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7413e53-4981-4223-bae2-7a94b1c41206-lib-modules\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746643    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpp9z\" (UniqueName: \"kubernetes.io/projected/3b1612e1-6629-4a77-bc5c-96599e1fbede-kube-api-access-lpp9z\") pod \"metrics-server-5c6f97fb75-wx6t6\" (UID: \"3b1612e1-6629-4a77-bc5c-96599e1fbede\") " pod="kube-system/metrics-server-5c6f97fb75-wx6t6"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746665    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2qr\" (UniqueName: \"kubernetes.io/projected/7dd09b51-1f7c-4726-8129-946ccb611d60-kube-api-access-4d2qr\") pod \"dashboard-metrics-scraper-dffd48c4c-zqmnn\" (UID: \"7dd09b51-1f7c-4726-8129-946ccb611d60\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-zqmnn"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746681    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623858be-728c-4b5e-8e31-b5713757b87c-config-volume\") pod \"coredns-6d4b75cb6d-r68q6\" (UID: \"623858be-728c-4b5e-8e31-b5713757b87c\") " pod="kube-system/coredns-6d4b75cb6d-r68q6"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746695    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkc4l\" (UniqueName: \"kubernetes.io/projected/623858be-728c-4b5e-8e31-b5713757b87c-kube-api-access-hkc4l\") pod \"coredns-6d4b75cb6d-r68q6\" (UID: \"623858be-728c-4b5e-8e31-b5713757b87c\") " pod="kube-system/coredns-6d4b75cb6d-r68q6"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746753    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrt5\" (UniqueName: \"kubernetes.io/projected/f7413e53-4981-4223-bae2-7a94b1c41206-kube-api-access-jxrt5\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746862    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e217a5f-3b4f-491b-8ff0-b385e6032f65-tmp\") pod \"storage-provisioner\" (UID: \"1e217a5f-3b4f-491b-8ff0-b385e6032f65\") " pod="kube-system/storage-provisioner"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746919    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b3d92a3d-a9e6-4310-865a-8f9cb6d82035-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-mhjvb\" (UID: \"b3d92a3d-a9e6-4310-865a-8f9cb6d82035\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mhjvb"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746990    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3b1612e1-6629-4a77-bc5c-96599e1fbede-tmp-dir\") pod \"metrics-server-5c6f97fb75-wx6t6\" (UID: \"3b1612e1-6629-4a77-bc5c-96599e1fbede\") " pod="kube-system/metrics-server-5c6f97fb75-wx6t6"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747046    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7413e53-4981-4223-bae2-7a94b1c41206-xtables-lock\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747133    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7dd09b51-1f7c-4726-8129-946ccb611d60-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-zqmnn\" (UID: \"7dd09b51-1f7c-4726-8129-946ccb611d60\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-zqmnn"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747200    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs5z\" (UniqueName: \"kubernetes.io/projected/b3d92a3d-a9e6-4310-865a-8f9cb6d82035-kube-api-access-rgs5z\") pod \"kubernetes-dashboard-5fd5574d9f-mhjvb\" (UID: \"b3d92a3d-a9e6-4310-865a-8f9cb6d82035\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mhjvb"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747278    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7413e53-4981-4223-bae2-7a94b1c41206-kube-proxy\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747291    9627 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 20:25:28 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:28.900287    9627 request.go:601] Waited for 1.133917224s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 20:25:28 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:28.905615    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220725131741-44543\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220725131741-44543"
	Jul 25 20:25:29 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:29.103966    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220725131741-44543\" already exists" pod="kube-system/kube-scheduler-no-preload-20220725131741-44543"
	Jul 25 20:25:29 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:29.320710    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220725131741-44543\" already exists" pod="kube-system/etcd-no-preload-20220725131741-44543"
	Jul 25 20:25:29 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:29.535278    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220725131741-44543\" already exists" pod="kube-system/kube-apiserver-no-preload-20220725131741-44543"
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244050    9627 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244127    9627 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244269    9627 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lpp9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-wx6t6_kube-system(3b1612e1-6629-4a77-bc5c-96599e1fbede): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244303    9627 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-wx6t6" podUID=3b1612e1-6629-4a77-bc5c-96599e1fbede
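	(Note: the ErrImagePull loop above is expected rather than a regression: the Audit table later in this log shows metrics-server was enabled with --registries=MetricsServer=fake.domain, so the kubelet is deliberately pointed at an unresolvable registry. Under the same daemon DNS assumption, the failure reproduces by hand with:
	
	docker pull fake.domain/k8s.gcr.io/echoserver:1.4)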
	
	* 
	* ==> kubernetes-dashboard [57bf001902d1] <==
	* 2022/07/25 20:24:38 Starting overwatch
	2022/07/25 20:24:38 Using namespace: kubernetes-dashboard
	2022/07/25 20:24:38 Using in-cluster config to connect to apiserver
	2022/07/25 20:24:38 Using secret token for csrf signing
	2022/07/25 20:24:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 20:24:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 20:24:38 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 20:24:38 Generating JWE encryption key
	2022/07/25 20:24:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 20:24:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 20:24:38 Initializing JWE encryption key from synchronized object
	2022/07/25 20:24:38 Creating in-cluster Sidecar client
	2022/07/25 20:24:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:24:38 Serving insecurely on HTTP port: 9090
	2022/07/25 20:25:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
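	(Note: the repeated metric client health check failure means the dashboard's Sidecar client cannot yet reach the dashboard-metrics-scraper Service; it retries every 30 seconds while the dashboard itself keeps serving on port 9090. One way to check whether that Service has ready endpoints:
	
	kubectl --context no-preload-20220725131741-44543 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper)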
	
	* 
	* ==> storage-provisioner [4ff08338fbe9] <==
	* I0725 20:24:32.692186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:24:32.701939       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:24:32.702010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:24:32.708071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:24:32.708219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220725131741-44543_d2512faa-681b-4a42-bd10-28354c4a4537!
	I0725 20:24:32.708978       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a74a2ed0-d980-424c-9fcd-210562437f90", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220725131741-44543_d2512faa-681b-4a42-bd10-28354c4a4537 became leader
	I0725 20:24:32.809329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220725131741-44543_d2512faa-681b-4a42-bd10-28354c4a4537!
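	(Note: the provisioner takes leadership through the k8s.io-minikube-hostpath Endpoints object in kube-system, per the LeaderElection event above. The current holder is recorded on that object, typically in a control-plane.alpha.kubernetes.io/leader annotation, which can be inspected with:
	
	kubectl --context no-preload-20220725131741-44543 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml)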
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-wx6t6
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 describe pod metrics-server-5c6f97fb75-wx6t6
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220725131741-44543 describe pod metrics-server-5c6f97fb75-wx6t6: exit status 1 (288.180506ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-wx6t6" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220725131741-44543 describe pod metrics-server-5c6f97fb75-wx6t6: exit status 1
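(Note: the NotFound here is most likely a post-mortem race rather than an additional failure: the metrics-server pod listed at helpers_test.go:270 appears to have been replaced between the listing and the describe, so the old name no longer resolves. Re-running the same field-selector query would show the replacement pod:

	kubectl --context no-preload-20220725131741-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running)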
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220725131741-44543
helpers_test.go:235: (dbg) docker inspect no-preload-20220725131741-44543:

-- stdout --
	[
	    {
	        "Id": "ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef",
	        "Created": "2022-07-25T20:17:43.927918712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:19:38.654487919Z",
	            "FinishedAt": "2022-07-25T20:19:36.710324647Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/hosts",
	        "LogPath": "/var/lib/docker/containers/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef/ad80487f975914209c4d3689585546ccd655439372777ebf28416588d53520ef-json.log",
	        "Name": "/no-preload-20220725131741-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220725131741-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220725131741-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b32028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/docker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3de36809935d1c99de1598441380a1830ad6676010e517f6f9f08eac27bb9c93/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220725131741-44543",
	                "Source": "/var/lib/docker/volumes/no-preload-20220725131741-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220725131741-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220725131741-44543",
	                "name.minikube.sigs.k8s.io": "no-preload-20220725131741-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff3367752dba2d7f7888a1b6e610f38fe877e3282dc489be00c3af4ffc717d9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58795"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58796"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58798"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58799"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ff3367752dba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220725131741-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ad80487f9759",
	                        "no-preload-20220725131741-44543"
	                    ],
	                    "NetworkID": "b1ac5d8a333e627253e80ab8f076639f114a35093181717e468951da733821e1",
	                    "EndpointID": "c13f7a6156c394b1261e1da28c4b37be6f47094fa09a3a51888abeab0903f33f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
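(Note: the inspect dump above is where the harness reads the host port mappings from, e.g. the apiserver's 8443/tcp is published on 0.0.0.0:58799. The same Go-template style the test driver uses earlier in this report pulls out a single mapping:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-20220725131741-44543)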
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220725131741-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220725131741-44543 logs -n 25: (2.703766827s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-20220725125922-44543                     | false-20220725125922-44543              | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:14 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p calico-20220725125923-44543                    | calico-20220725125923-44543             | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:14 PDT |
	| start   | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:15 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p false-20220725125922-44543                     | false-20220725125922-44543              | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:14 PDT |
	| start   | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:14 PDT | 25 Jul 22 13:15 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220725125922-44543                    | bridge-20220725125922-44543             | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:15 PDT |
	| start   | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:15 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220725125922-44543 | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | enable-default-cni-20220725125922-44543           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725125922-44543            | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:17 PDT |
	|         | kubenet-20220725125922-44543                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:20 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543    | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543         | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
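	(Note: read top to bottom, the Audit trail is the serial sequence for this group: start with --preload=false, enable metrics-server against the intentionally bogus fake.domain registry, stop, enable dashboard, restart, then pause and unpause. The pause step exercised by the failing test can be replayed manually against the same profile with:
	
	out/minikube-darwin-amd64 pause -p no-preload-20220725131741-44543 --alsologtostderr -v=1)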
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:21:53
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:21:53.673919   60183 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:21:53.674091   60183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:21:53.674097   60183 out.go:309] Setting ErrFile to fd 2...
	I0725 13:21:53.674101   60183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:21:53.674202   60183 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:21:53.674680   60183 out.go:303] Setting JSON to false
	I0725 13:21:53.690728   60183 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15685,"bootTime":1658764828,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:21:53.690811   60183 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:21:53.712538   60183 out.go:177] * [old-k8s-version-20220725131610-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:21:53.734468   60183 notify.go:193] Checking for updates...
	I0725 13:21:53.755405   60183 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:21:53.777462   60183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:21:53.798424   60183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:21:53.819416   60183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:21:53.840488   60183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:21:53.862141   60183 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:21:53.884290   60183 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0725 13:21:53.905392   60183 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:21:53.973956   60183 docker.go:137] docker version: linux-20.10.17
	I0725 13:21:53.974120   60183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:21:54.106665   60183 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:21:54.051064083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:21:54.128839   60183 out.go:177] * Using the docker driver based on existing profile
	I0725 13:21:54.150256   60183 start.go:284] selected driver: docker
	I0725 13:21:54.150312   60183 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:54.150444   60183 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:21:54.153661   60183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:21:54.288038   60183 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:21:54.230541816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:21:54.288195   60183 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:21:54.288211   60183 cni.go:95] Creating CNI manager for ""
	I0725 13:21:54.288221   60183 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:21:54.288229   60183 start_flags.go:310] config:
	{Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:54.310268   60183 out.go:177] * Starting control plane node old-k8s-version-20220725131610-44543 in cluster old-k8s-version-20220725131610-44543
	I0725 13:21:54.332068   60183 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:21:54.353929   60183 out.go:177] * Pulling base image ...
	I0725 13:21:54.396171   60183 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:21:54.396230   60183 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:21:54.396268   60183 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0725 13:21:54.396303   60183 cache.go:57] Caching tarball of preloaded images
	I0725 13:21:54.396533   60183 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:21:54.396569   60183 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0725 13:21:54.397710   60183 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:21:54.461117   60183 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:21:54.461134   60183 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:21:54.461150   60183 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:21:54.461221   60183 start.go:370] acquiring machines lock for old-k8s-version-20220725131610-44543: {Name:mka786150aa94c7510878ab5519b8cf30abe9378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:21:54.461319   60183 start.go:374] acquired machines lock for "old-k8s-version-20220725131610-44543" in 74.735µs
	I0725 13:21:54.461339   60183 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:21:54.461349   60183 fix.go:55] fixHost starting: 
	I0725 13:21:54.461599   60183 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:21:54.529917   60183 fix.go:103] recreateIfNeeded on old-k8s-version-20220725131610-44543: state=Stopped err=<nil>
	W0725 13:21:54.529947   60183 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:21:54.573533   60183 out.go:177] * Restarting existing docker container for "old-k8s-version-20220725131610-44543" ...
	I0725 13:21:52.402493   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:54.901739   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:54.594675   60183 cli_runner.go:164] Run: docker start old-k8s-version-20220725131610-44543
	I0725 13:21:54.964125   60183 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220725131610-44543 --format={{.State.Status}}
	I0725 13:21:55.037820   60183 kic.go:415] container "old-k8s-version-20220725131610-44543" state is running.
	I0725 13:21:55.038433   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:55.113560   60183 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/config.json ...
	I0725 13:21:55.114030   60183 machine.go:88] provisioning docker machine ...
	I0725 13:21:55.114068   60183 ubuntu.go:169] provisioning hostname "old-k8s-version-20220725131610-44543"
	I0725 13:21:55.114171   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.190035   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.190239   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.190254   60183 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220725131610-44543 && echo "old-k8s-version-20220725131610-44543" | sudo tee /etc/hostname
	I0725 13:21:55.319366   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220725131610-44543
	
	I0725 13:21:55.319439   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.392552   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.392712   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.392732   60183 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220725131610-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220725131610-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220725131610-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:21:55.513463   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:21:55.513485   60183 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:21:55.513514   60183 ubuntu.go:177] setting up certificates
	I0725 13:21:55.513524   60183 provision.go:83] configureAuth start
	I0725 13:21:55.513588   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:55.584163   60183 provision.go:138] copyHostCerts
	I0725 13:21:55.584244   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:21:55.584253   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:21:55.584354   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:21:55.584593   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:21:55.584602   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:21:55.584658   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:21:55.584799   60183 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:21:55.584805   60183 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:21:55.584862   60183 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:21:55.584974   60183 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220725131610-44543 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220725131610-44543]
	I0725 13:21:55.687712   60183 provision.go:172] copyRemoteCerts
	I0725 13:21:55.687798   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:21:55.687857   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.758975   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:55.843244   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:21:55.859895   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0725 13:21:55.876505   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:21:55.893602   60183 provision.go:86] duration metric: configureAuth took 380.052293ms
	I0725 13:21:55.893616   60183 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:21:55.893756   60183 config.go:178] Loaded profile config "old-k8s-version-20220725131610-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0725 13:21:55.893807   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:55.964720   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:55.964908   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:55.964920   60183 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:21:56.084753   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:21:56.084769   60183 ubuntu.go:71] root file system type: overlay
	I0725 13:21:56.084915   60183 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:21:56.084981   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.155842   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:56.155981   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:56.156032   60183 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:21:56.286190   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:21:56.286275   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.357571   60183 main.go:134] libmachine: Using SSH client type: native
	I0725 13:21:56.357744   60183 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58933 <nil> <nil>}
	I0725 13:21:56.357760   60183 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:21:56.482497   60183 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:21:56.482513   60183 machine.go:91] provisioned docker machine in 1.368435196s
	I0725 13:21:56.482522   60183 start.go:307] post-start starting for "old-k8s-version-20220725131610-44543" (driver="docker")
	I0725 13:21:56.482527   60183 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:21:56.482601   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:21:56.482652   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.554006   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.642412   60183 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:21:56.645967   60183 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:21:56.645982   60183 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:21:56.645989   60183 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:21:56.645993   60183 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:21:56.646005   60183 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:21:56.646118   60183 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:21:56.646284   60183 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:21:56.646439   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:21:56.653543   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:21:56.673151   60183 start.go:310] post-start completed in 190.601782ms
	I0725 13:21:56.673236   60183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:21:56.673292   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.745535   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.830784   60183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:21:56.836577   60183 fix.go:57] fixHost completed within 2.375156628s
	I0725 13:21:56.836597   60183 start.go:82] releasing machines lock for "old-k8s-version-20220725131610-44543", held for 2.375196554s
	I0725 13:21:56.836691   60183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220725131610-44543
	I0725 13:21:56.908406   60183 ssh_runner.go:195] Run: systemctl --version
	I0725 13:21:56.908410   60183 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:21:56.908468   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.908476   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:56.984091   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:56.985901   60183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58933 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/old-k8s-version-20220725131610-44543/id_rsa Username:docker}
	I0725 13:21:57.198212   60183 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:21:57.207890   60183 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:21:57.207956   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:21:57.219448   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:21:57.232370   60183 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:21:57.302875   60183 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:21:57.376726   60183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:21:57.442738   60183 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:21:57.646325   60183 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:21:57.685082   60183 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:21:57.778355   60183 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0725 13:21:57.778528   60183 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220725131610-44543 dig +short host.docker.internal
	I0725 13:21:57.907625   60183 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:21:57.907747   60183 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:21:57.911756   60183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:21:57.921003   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:57.991786   60183 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 13:21:57.991860   60183 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:21:58.022698   60183 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:21:58.022711   60183 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:21:58.022798   60183 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:21:58.052074   60183 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0725 13:21:58.052091   60183 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:21:58.052214   60183 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:21:58.125974   60183 cni.go:95] Creating CNI manager for ""
	I0725 13:21:58.125987   60183 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:21:58.126001   60183 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:21:58.126035   60183 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220725131610-44543 NodeName:old-k8s-version-20220725131610-44543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:21:58.126181   60183 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220725131610-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220725131610-44543
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:21:58.126269   60183 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220725131610-44543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:21:58.126356   60183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0725 13:21:58.134118   60183 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:21:58.134189   60183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:21:58.141324   60183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0725 13:21:58.154757   60183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:21:58.167498   60183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0725 13:21:58.179668   60183 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:21:58.183227   60183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:21:58.192523   60183 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543 for IP: 192.168.67.2
	I0725 13:21:58.192631   60183 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:21:58.192684   60183 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:21:58.192765   60183 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/client.key
	I0725 13:21:58.192828   60183 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key.c7fa3a9e
	I0725 13:21:58.192872   60183 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key
	I0725 13:21:58.193074   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:21:58.193119   60183 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:21:58.193132   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:21:58.193167   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:21:58.193202   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:21:58.193229   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:21:58.193300   60183 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:21:58.193838   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:21:58.210321   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 13:21:58.228970   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:21:58.245718   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/old-k8s-version-20220725131610-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:21:58.262421   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:21:58.279214   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:21:58.297844   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:21:58.314779   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:21:58.331511   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:21:58.348755   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:21:58.365526   60183 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:21:58.382721   60183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:21:58.395675   60183 ssh_runner.go:195] Run: openssl version
	I0725 13:21:58.401635   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:21:58.409787   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.413787   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.413829   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:21:58.419159   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:21:58.426230   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:21:58.434193   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.438053   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.438096   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:21:58.443183   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:21:58.450469   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:21:58.457925   60183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.461769   60183 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.461816   60183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:21:58.467074   60183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:21:58.474326   60183 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220725131610-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220725131610-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:21:58.474425   60183 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:21:58.502814   60183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:21:58.510458   60183 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:21:58.510472   60183 kubeadm.go:626] restartCluster start
	I0725 13:21:58.510516   60183 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:21:58.517042   60183 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:58.517101   60183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220725131610-44543
	I0725 13:21:58.590607   60183 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220725131610-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:21:58.590795   60183 kubeconfig.go:127] "old-k8s-version-20220725131610-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:21:58.591098   60183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:21:58.592462   60183 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:21:58.600334   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:58.600385   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:58.608459   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:57.402044   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:59.904598   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:21:58.808842   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:58.808962   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:58.817999   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.008657   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.008819   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.019192   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.210602   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.210815   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.221605   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.408833   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.408950   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.417472   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.609619   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.609820   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.621045   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:21:59.809314   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:21:59.809409   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:21:59.821368   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.008728   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.008894   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.018811   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.208723   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.208885   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.219444   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.408638   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.408732   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.417392   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.610672   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.610860   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.621365   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:00.808746   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:00.808881   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:00.818878   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.009664   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.009771   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.020457   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.208785   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.208891   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.217523   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.409152   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.409246   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.418133   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.608696   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.608826   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.618526   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.618536   60183 api_server.go:165] Checking apiserver status ...
	I0725 13:22:01.618580   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:22:01.626858   60183 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:22:01.626869   60183 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 13:22:01.626876   60183 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:22:01.626930   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:22:01.657002   60183 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:22:01.667081   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:22:01.674438   60183 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jul 25 20:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jul 25 20:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jul 25 20:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jul 25 20:18 /etc/kubernetes/scheduler.conf
	
	I0725 13:22:01.674489   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:22:01.681528   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:22:01.688711   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:22:01.695801   60183 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:22:01.703394   60183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:22:01.710791   60183 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:22:01.710802   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:01.761240   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.581237   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.790070   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:22:02.852549   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
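(For reference, the reconfigure pass above reduces to a config refresh plus five kubeadm phase invocations. A minimal shell sketch, reconstructed from the Run: lines in this log; it assumes the kubeadm binary minikube staged under /var/lib/minikube/binaries/v1.16.0 is present on the node:)

    # Refresh the staged kubeadm config, then rebuild each control-plane layer in order:
    # certificates, kubeconfigs, kubelet bootstrap, static pod manifests, local etcd.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments.
      sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done

(Running the phases individually lets minikube reuse the node's existing state where possible instead of doing a full kubeadm init.)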
	I0725 13:22:02.904809   60183 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:22:02.904874   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:03.415372   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:02.402433   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:04.905321   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:03.913636   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:04.413713   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:04.913407   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:05.413354   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:05.913417   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:06.413486   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:06.915522   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:07.414044   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:07.915400   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:08.413594   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:07.403006   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:09.903435   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:08.915541   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:09.413617   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:09.914519   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:10.413482   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:10.915473   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:11.413695   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:11.913720   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:12.414308   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:12.914018   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:13.413606   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:12.403283   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:14.905458   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:13.913946   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:14.415576   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:14.915757   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:15.413747   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:15.913751   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:16.413958   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:16.913984   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:17.413766   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:17.915900   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:18.414053   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:17.402506   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:19.905164   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:18.915793   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:19.413913   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:19.914049   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:20.413988   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:20.914020   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:21.414013   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:21.914244   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:22.414678   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:22.915965   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:23.416018   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:22.403157   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:24.905435   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:23.913889   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:24.414309   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:24.914097   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:25.416085   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:25.914557   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:26.414004   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:26.913961   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:27.415609   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:27.915017   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:28.414405   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:27.402921   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:29.903644   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:31.905897   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:28.914648   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:29.416017   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:29.914651   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:30.416213   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:30.914111   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:31.414680   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:31.914434   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:32.416332   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:32.914962   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:33.415016   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:34.403864   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:36.404403   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:33.914290   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:34.416347   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:34.915975   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:35.414989   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:35.914255   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:36.416340   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:36.914596   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:37.414585   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:37.914476   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:38.415904   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:38.405715   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:40.905906   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:38.914361   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:39.415270   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:39.914715   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:40.414871   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:40.915265   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:41.414455   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:41.915093   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:42.414441   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:42.914544   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:43.414430   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:43.406177   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:45.904207   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:43.914464   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:44.414683   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:44.915872   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:45.415033   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:45.916689   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:46.415363   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:46.914864   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:47.415651   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:47.914762   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:48.414887   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:47.905078   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:50.404518   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:48.914639   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:49.415136   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:49.914786   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:50.415401   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:50.916109   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:51.415002   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:51.915039   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:52.415197   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:52.916879   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:53.414799   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:52.904686   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:55.403351   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:53.916903   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:54.414955   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:54.915486   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:55.415041   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:55.916580   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:56.415003   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:56.915430   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:57.414998   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:57.916705   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:58.415210   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:57.404576   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:59.906226   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:01.906333   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:22:58.914921   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:59.415400   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:22:59.915038   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:00.415912   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:00.915078   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:01.414978   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:01.915311   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:02.415864   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
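(The pgrep storm above is minikube's wait-for-apiserver loop: the same probe fired roughly every 500ms, as the timestamps show, until the 60-second deadline at 13:23:02, after which it backs off to one probe per diagnostics round. In shell terms, a sketch with the cadence and bound inferred from the timestamps in this log:)

    # Poll for a kube-apiserver process on the node; give up after ~60s.
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done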
	I0725 13:23:02.915524   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:02.945401   60183 logs.go:274] 0 containers: []
	W0725 13:23:02.945413   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:02.945478   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:02.973644   60183 logs.go:274] 0 containers: []
	W0725 13:23:02.973657   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:02.973724   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:03.002721   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.002734   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:03.002788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:03.031519   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.031535   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:03.031603   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:03.061426   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.061439   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:03.061493   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:03.089574   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.089587   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:03.089645   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:03.118793   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.118804   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:03.118869   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:03.148187   60183 logs.go:274] 0 containers: []
	W0725 13:23:03.148199   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:03.148205   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:03.148211   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:03.189187   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:03.189204   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:03.200922   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:03.200939   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:03.253329   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:03.253345   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:03.253354   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:03.267096   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:03.267108   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:04.404910   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:06.406259   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:05.318288   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051108125s)
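(With no apiserver by the deadline, minikube switches to the diagnostics pass seen above: it checks for each expected control-plane container by name, then dumps kubelet, dmesg, node, Docker, and container-status logs. A sketch assembled from the Run: lines in this round; the k8s_ name prefix is the kubelet's container naming convention on the Docker runtime:)

    # Look for each control-plane container the Docker runtime should have created.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kubernetes-dashboard storage-provisioner kube-controller-manager; do
      docker ps -a --filter="name=k8s_${name}" --format='{{.ID}}'
    done
    # Fallback log sources gathered when none of them are running.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

(In this run every container probe comes back empty and describe nodes gets connection refused on localhost:8443, which is why the round repeats every ~5 seconds below.)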
	I0725 13:23:07.820791   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:07.916670   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:07.946805   60183 logs.go:274] 0 containers: []
	W0725 13:23:07.946817   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:07.946877   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:07.976713   60183 logs.go:274] 0 containers: []
	W0725 13:23:07.976727   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:07.976787   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:08.008280   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.008294   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:08.008368   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:08.039002   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.039018   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:08.039079   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:08.068905   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.068916   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:08.068975   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:08.097527   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.097539   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:08.097606   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:08.125958   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.125970   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:08.126034   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:08.154963   60183 logs.go:274] 0 containers: []
	W0725 13:23:08.154976   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:08.154983   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:08.154989   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:08.199198   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:08.199212   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:08.210469   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:08.210485   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:08.263518   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:08.263531   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:08.263538   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:08.277559   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:08.277572   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:08.907160   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:10.907402   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:10.326919   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04927568s)
	I0725 13:23:12.827696   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:12.915338   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:12.944085   60183 logs.go:274] 0 containers: []
	W0725 13:23:12.944096   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:12.944151   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:12.974168   60183 logs.go:274] 0 containers: []
	W0725 13:23:12.974180   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:12.974244   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:13.002821   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.002833   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:13.002887   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:13.031211   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.031224   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:13.031281   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:13.060657   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.060672   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:13.060728   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:13.089071   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.089083   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:13.089145   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:13.118878   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.118891   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:13.118949   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:13.147109   60183 logs.go:274] 0 containers: []
	W0725 13:23:13.147120   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:13.147149   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:13.147161   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:13.159243   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:13.159254   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:13.212182   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:13.212193   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:13.212202   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:13.227312   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:13.227327   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:13.406101   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:15.907656   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:15.282546   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0551466s)
	I0725 13:23:15.282653   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:15.282659   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:17.824516   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:17.916271   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:17.946818   60183 logs.go:274] 0 containers: []
	W0725 13:23:17.946831   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:17.946889   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:17.975561   60183 logs.go:274] 0 containers: []
	W0725 13:23:17.975573   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:17.975634   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:18.004924   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.004936   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:18.004998   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:18.033904   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.033916   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:18.033972   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:18.063640   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.063653   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:18.063713   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:18.091848   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.091864   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:18.091918   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:18.120698   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.120710   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:18.120772   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:18.150302   60183 logs.go:274] 0 containers: []
	W0725 13:23:18.150314   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:18.150321   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:18.150328   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:18.189307   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:18.189321   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:18.201238   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:18.201251   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:18.257070   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:18.257081   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:18.257091   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:18.271090   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:18.271102   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:18.406128   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:20.907817   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:20.325947   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054774189s)
	I0725 13:23:22.827182   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:22.916614   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:22.947019   60183 logs.go:274] 0 containers: []
	W0725 13:23:22.947032   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:22.947094   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:22.976102   60183 logs.go:274] 0 containers: []
	W0725 13:23:22.976115   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:22.976175   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:23.005390   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.005405   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:23.005472   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:23.036043   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.036058   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:23.036113   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:23.065291   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.065303   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:23.065362   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:23.094601   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.094612   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:23.094677   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:23.124130   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.124142   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:23.124197   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:23.152885   60183 logs.go:274] 0 containers: []
	W0725 13:23:23.152898   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:23.152906   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:23.152915   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:23.207267   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:23.207277   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:23.207303   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:23.220621   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:23.220633   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:23.407291   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:25.907430   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:25.277761   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057055425s)
	I0725 13:23:25.277872   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:25.277880   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:25.317120   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:25.317134   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:27.830407   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:27.916091   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:27.947876   60183 logs.go:274] 0 containers: []
	W0725 13:23:27.947889   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:27.947943   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:27.976656   60183 logs.go:274] 0 containers: []
	W0725 13:23:27.976668   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:27.976726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:28.005656   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.005669   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:28.005726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:28.035060   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.035072   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:28.035132   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:28.063371   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.063395   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:28.063456   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:28.093066   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.093078   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:28.093142   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:28.121760   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.121773   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:28.121829   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:28.150873   60183 logs.go:274] 0 containers: []
	W0725 13:23:28.150885   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:28.150891   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:28.150901   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:28.166253   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:28.166265   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:28.405040   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:30.405887   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:30.219274   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052936893s)
	I0725 13:23:30.219386   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:30.219393   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:30.259179   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:30.259192   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:30.270501   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:30.270513   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:30.323106   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:32.825418   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:32.915954   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:32.946505   60183 logs.go:274] 0 containers: []
	W0725 13:23:32.946517   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:32.946580   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:32.976363   60183 logs.go:274] 0 containers: []
	W0725 13:23:32.976376   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:32.976442   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:33.004925   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.004938   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:33.004996   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:33.034716   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.034728   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:33.034788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:33.062554   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.062566   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:33.062623   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:33.091734   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.091746   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:33.091805   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:33.120846   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.120858   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:33.120924   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:33.149461   60183 logs.go:274] 0 containers: []
	W0725 13:23:33.149474   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:33.149481   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:33.149492   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:33.188609   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:33.188621   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:33.200250   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:33.200263   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:33.252688   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:33.252701   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:33.252711   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:33.266791   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:33.266803   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:32.907384   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:34.907760   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:35.325253   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05837666s)
	I0725 13:23:37.827729   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:37.918114   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:37.948670   60183 logs.go:274] 0 containers: []
	W0725 13:23:37.948682   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:37.948740   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:37.978509   60183 logs.go:274] 0 containers: []
	W0725 13:23:37.978521   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:37.978606   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:38.008790   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.008805   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:38.008873   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:38.037601   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.037614   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:38.037674   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:38.066393   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.066407   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:38.066480   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:38.094341   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.094354   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:38.094413   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:38.123151   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.123163   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:38.123228   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:38.151883   60183 logs.go:274] 0 containers: []
	W0725 13:23:38.151894   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:38.151901   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:38.151913   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:38.164057   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:38.164070   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:38.217391   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:38.217404   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:38.217411   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:38.232266   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:38.232279   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:37.406227   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:39.905335   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:41.906715   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:40.284014   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051664102s)
	I0725 13:23:40.284120   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:40.284127   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:42.824116   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:42.916418   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:42.945082   60183 logs.go:274] 0 containers: []
	W0725 13:23:42.945095   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:42.945161   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:42.976210   60183 logs.go:274] 0 containers: []
	W0725 13:23:42.976221   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:42.976283   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:43.004760   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.004772   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:43.004828   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:43.034045   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.034057   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:43.034136   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:43.063735   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.063747   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:43.063807   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:43.092971   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.092984   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:43.093046   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:43.122089   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.122102   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:43.122165   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:43.151913   60183 logs.go:274] 0 containers: []
	W0725 13:23:43.151927   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:43.151933   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:43.151940   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:43.191482   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:43.191500   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:43.204833   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:43.204851   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:43.266710   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:43.266721   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:43.266728   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:43.280481   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:43.280493   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:44.406475   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:46.406672   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:45.335689   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055123517s)
	I0725 13:23:47.836914   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:47.916508   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:47.947483   60183 logs.go:274] 0 containers: []
	W0725 13:23:47.947496   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:47.947555   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:47.976844   60183 logs.go:274] 0 containers: []
	W0725 13:23:47.976858   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:47.976921   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:48.006778   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.006790   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:48.006847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:48.036361   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.036374   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:48.036438   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:48.066116   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.066132   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:48.066196   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:48.095236   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.095249   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:48.095308   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:48.124615   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.124627   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:48.124684   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:48.154933   60183 logs.go:274] 0 containers: []
	W0725 13:23:48.154945   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:48.154951   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:48.154958   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:48.196269   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:48.196282   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:48.208071   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:48.208082   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:48.261791   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:48.261801   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:48.261807   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:48.275612   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:48.275624   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:48.408066   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:50.906907   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:50.328143   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05244781s)
	I0725 13:23:52.830598   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:52.916555   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:52.946432   60183 logs.go:274] 0 containers: []
	W0725 13:23:52.946449   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:52.946523   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:52.976593   60183 logs.go:274] 0 containers: []
	W0725 13:23:52.976605   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:52.976673   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:53.010114   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.010126   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:53.010182   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:53.038708   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.038720   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:53.038781   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:53.067454   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.067466   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:53.067528   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:53.095959   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.095971   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:53.096030   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:53.126372   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.126385   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:53.126450   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:53.155509   60183 logs.go:274] 0 containers: []
	W0725 13:23:53.155523   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:53.155530   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:23:53.155537   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:23:53.195731   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:23:53.195744   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:23:53.207459   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:23:53.207473   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:23:53.260748   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:23:53.260768   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:23:53.260775   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:23:53.274157   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:53.274169   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:53.407193   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:55.408050   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:55.324854   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0506133s)
	I0725 13:23:57.825573   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:23:57.917185   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:23:57.947745   60183 logs.go:274] 0 containers: []
	W0725 13:23:57.947758   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:23:57.947814   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:23:57.975616   60183 logs.go:274] 0 containers: []
	W0725 13:23:57.975628   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:23:57.975690   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:23:58.004104   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.004116   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:23:58.004180   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:23:58.032249   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.032261   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:23:58.032330   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:23:58.062006   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.062021   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:23:58.062074   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:23:58.090537   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.090548   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:23:58.090607   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:23:58.119003   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.119015   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:23:58.119071   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:23:58.149646   60183 logs.go:274] 0 containers: []
	W0725 13:23:58.149660   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:23:58.149668   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:23:58.149677   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:23:57.905980   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:23:59.906275   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:00.207223   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057470129s)
	I0725 13:24:00.207346   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:00.207356   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:00.246278   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:00.246294   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:00.257799   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:00.257812   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:00.311151   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:00.311187   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:00.311201   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:02.825458   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:02.916753   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:02.946058   60183 logs.go:274] 0 containers: []
	W0725 13:24:02.946070   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:02.946127   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:02.974437   60183 logs.go:274] 0 containers: []
	W0725 13:24:02.974450   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:02.974506   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:03.004307   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.004320   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:03.004405   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:03.034237   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.034248   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:03.034308   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:03.066725   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.066737   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:03.066792   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:03.097377   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.097389   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:03.097449   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:03.126782   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.126794   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:03.126857   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:03.155129   60183 logs.go:274] 0 containers: []
	W0725 13:24:03.155142   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:03.155149   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:03.155155   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:03.195481   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:03.195494   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:03.206820   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:03.206835   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:03.259802   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:03.259812   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:03.259818   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:03.273974   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:03.273987   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:02.406621   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:04.908053   59920 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:06.901898   59920 pod_ready.go:81] duration metric: took 4m0.006312009s waiting for pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace to be "Ready" ...
	E0725 13:24:06.902006   59920 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dlbq9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:24:06.902038   59920 pod_ready.go:38] duration metric: took 4m15.045888844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:24:06.902078   59920 kubeadm.go:630] restartCluster took 4m24.382062853s
	W0725 13:24:06.902211   59920 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 13:24:06.902238   59920 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:24:05.328003   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053944862s)
	I0725 13:24:07.828361   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:07.917072   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:07.949966   60183 logs.go:274] 0 containers: []
	W0725 13:24:07.949983   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:07.950052   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:07.988332   60183 logs.go:274] 0 containers: []
	W0725 13:24:07.988346   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:07.988409   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:08.027678   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.027690   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:08.027756   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:08.059823   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.059836   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:08.059905   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:08.093298   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.093311   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:08.093374   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:08.131132   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.131144   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:08.131200   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:08.163873   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.163888   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:08.163950   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:08.195373   60183 logs.go:274] 0 containers: []
	W0725 13:24:08.195386   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:08.195392   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:08.195399   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:08.239634   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:08.239650   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:08.257904   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:08.257919   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:08.319885   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:08.319898   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:08.319904   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:08.336710   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:08.336724   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:09.233197   59920 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.330876471s)
	I0725 13:24:09.233255   59920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:24:09.242559   59920 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:24:09.250536   59920 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:24:09.250580   59920 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:24:09.257644   59920 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:24:09.257666   59920 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:24:09.560158   59920 out.go:204]   - Generating certificates and keys ...
	I0725 13:24:10.140505   59920 out.go:204]   - Booting up control plane ...
	I0725 13:24:10.401425   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06462793s)
	I0725 13:24:12.901765   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:12.917037   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:12.952668   60183 logs.go:274] 0 containers: []
	W0725 13:24:12.952681   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:12.952736   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:12.982943   60183 logs.go:274] 0 containers: []
	W0725 13:24:12.982955   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:12.983017   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:13.013797   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.013810   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:13.013876   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:13.044254   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.044267   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:13.044326   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:13.074217   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.074230   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:13.074293   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:13.109560   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.109573   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:13.109636   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:13.140893   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.140906   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:13.140965   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:13.176452   60183 logs.go:274] 0 containers: []
	W0725 13:24:13.176466   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:13.176474   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:13.176482   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:13.221236   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:13.221274   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:13.234259   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:13.234274   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:13.291367   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:13.291377   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:13.291384   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:13.306619   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:13.306632   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:17.187291   59920 out.go:204]   - Configuring RBAC rules ...
	I0725 13:24:15.363070   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056365342s)
	I0725 13:24:17.865239   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:17.917268   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:17.948026   60183 logs.go:274] 0 containers: []
	W0725 13:24:17.948038   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:17.948094   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:17.978209   60183 logs.go:274] 0 containers: []
	W0725 13:24:17.978222   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:17.978280   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:18.006707   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.006718   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:18.006775   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:18.037659   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.037671   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:18.037726   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:18.065998   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.066016   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:18.066075   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:18.096217   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.096230   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:18.096286   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:18.126356   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.126369   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:18.126427   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:18.155056   60183 logs.go:274] 0 containers: []
	W0725 13:24:18.155068   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:18.155074   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:18.155088   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:18.210436   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:18.210447   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:18.210455   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:18.224505   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:18.224517   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:17.598425   59920 cni.go:95] Creating CNI manager for ""
	I0725 13:24:17.598438   59920 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:24:17.598460   59920 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:24:17.598535   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=no-preload-20220725131741-44543 minikube.k8s.io/updated_at=2022_07_25T13_24_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:17.598542   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:17.739103   59920 ops.go:34] apiserver oom_adj: -16
	I0725 13:24:17.739207   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:18.299078   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:18.799356   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:19.298748   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:19.798859   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:20.298802   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:20.800927   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:21.298883   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:21.798915   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:22.298950   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:20.280940   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056351635s)
	I0725 13:24:20.281045   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:20.281052   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:20.322100   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:20.322118   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:22.836188   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:22.918171   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:22.949256   60183 logs.go:274] 0 containers: []
	W0725 13:24:22.949269   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:22.949330   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:22.979856   60183 logs.go:274] 0 containers: []
	W0725 13:24:22.979872   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:22.979930   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:23.009212   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.009224   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:23.009280   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:23.040003   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.040014   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:23.040069   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:23.070063   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.070075   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:23.070129   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:23.098168   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.098181   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:23.098239   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:23.127379   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.127392   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:23.127449   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:23.156617   60183 logs.go:274] 0 containers: []
	W0725 13:24:23.156630   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:23.156637   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:23.156644   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:23.208837   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:23.208847   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:23.208854   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:23.222431   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:23.222443   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:22.800905   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:23.299002   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:23.799005   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:24.299025   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:24.800802   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:25.299374   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:25.799034   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:26.300134   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:26.799616   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:27.299108   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:25.276610   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054096015s)
	I0725 13:24:25.276716   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:25.276723   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:25.317113   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:25.317132   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:27.831788   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:27.917665   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:27.951671   60183 logs.go:274] 0 containers: []
	W0725 13:24:27.951683   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:27.951742   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:27.981792   60183 logs.go:274] 0 containers: []
	W0725 13:24:27.981805   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:27.981861   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:28.010660   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.010675   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:28.010745   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:28.039897   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.039910   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:28.039966   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:28.069312   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.069324   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:28.069379   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:28.098531   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.098544   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:28.098599   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:28.127653   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.127666   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:28.127720   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:28.156147   60183 logs.go:274] 0 containers: []
	W0725 13:24:28.156162   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:28.156169   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:28.156177   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:28.202017   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:28.202037   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:28.219890   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:28.219905   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:28.279250   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:28.279263   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:28.279270   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:28.294488   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:28.294502   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:27.801186   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:28.299013   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:28.800294   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:29.300649   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:29.799066   59920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:24:29.855550   59920 kubeadm.go:1045] duration metric: took 12.256709186s to wait for elevateKubeSystemPrivileges.
	I0725 13:24:29.855565   59920 kubeadm.go:397] StartCluster complete in 4m47.372303679s
	I0725 13:24:29.855580   59920 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:24:29.855656   59920 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:24:29.856179   59920 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:24:30.372184   59920 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220725131741-44543" rescaled to 1
	I0725 13:24:30.372224   59920 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:24:30.372230   59920 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:24:30.372264   59920 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:24:30.372424   59920 config.go:178] Loaded profile config "no-preload-20220725131741-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:24:30.392785   59920 out.go:177] * Verifying Kubernetes components...
	I0725 13:24:30.392883   59920 addons.go:65] Setting dashboard=true in profile "no-preload-20220725131741-44543"
	I0725 13:24:30.392886   59920 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220725131741-44543"
	I0725 13:24:30.413664   59920 addons.go:153] Setting addon dashboard=true in "no-preload-20220725131741-44543"
	W0725 13:24:30.413684   59920 addons.go:162] addon dashboard should already be in state true
	I0725 13:24:30.413691   59920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:24:30.413664   59920 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220725131741-44543"
	W0725 13:24:30.413704   59920 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:24:30.392924   59920 addons.go:65] Setting metrics-server=true in profile "no-preload-20220725131741-44543"
	I0725 13:24:30.413726   59920 addons.go:153] Setting addon metrics-server=true in "no-preload-20220725131741-44543"
	I0725 13:24:30.413744   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.413746   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.392882   59920 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220725131741-44543"
	W0725 13:24:30.413754   59920 addons.go:162] addon metrics-server should already be in state true
	I0725 13:24:30.413768   59920 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220725131741-44543"
	I0725 13:24:30.413781   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.414105   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.414123   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.414172   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.414226   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.458052   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.458055   59920 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:24:30.572793   59920 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:24:30.579147   59920 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220725131741-44543"
	I0725 13:24:30.592379   59920 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:24:30.613626   59920 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:24:30.646222   59920 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220725131741-44543" to be "Ready" ...
	W0725 13:24:30.650686   59920 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:24:30.650732   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:24:30.708629   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:24:30.653982   59920 node_ready.go:49] node "no-preload-20220725131741-44543" has status "Ready":"True"
	I0725 13:24:30.708646   59920 node_ready.go:38] duration metric: took 57.970336ms waiting for node "no-preload-20220725131741-44543" to be "Ready" ...
	I0725 13:24:30.687570   59920 host.go:66] Checking if "no-preload-20220725131741-44543" exists ...
	I0725 13:24:30.708657   59920 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:24:30.708685   59920 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:24:30.745654   59920 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:24:30.708708   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.709081   59920 cli_runner.go:164] Run: docker container inspect no-preload-20220725131741-44543 --format={{.State.Status}}
	I0725 13:24:30.715940   59920 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:30.745705   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:24:30.782639   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:24:30.782656   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:24:30.782710   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.782759   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.900399   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:30.901115   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:30.902059   59920 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:24:30.902074   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:24:30.902161   59920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220725131741-44543
	I0725 13:24:30.904793   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:30.984451   59920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58795 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/no-preload-20220725131741-44543/id_rsa Username:docker}
	I0725 13:24:31.086020   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:24:31.102717   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:24:31.102730   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:24:31.111375   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:24:31.111390   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:24:31.192107   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:24:31.192120   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:24:31.215194   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:24:31.291555   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:24:31.291573   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:24:31.298933   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:24:31.298948   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:24:31.381035   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:24:31.381051   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:24:31.390048   59920 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:24:31.390070   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:24:31.405204   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:24:31.405219   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:24:31.497359   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:24:31.497362   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:24:31.497375   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:24:31.591751   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:24:31.591768   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:24:31.593614   59920 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.135485755s)
	I0725 13:24:31.593659   59920 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
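The one-shot sed pipeline completed above splices a hosts block into the coredns ConfigMap just before its forward directive, so that host.minikube.internal resolves to the host gateway (192.168.65.2 on this Docker Desktop run). Reconstructed from the sed expression in the log itself, the resulting Corefile fragment looks roughly like:

    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf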
	I0725 13:24:31.683293   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:24:31.683308   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:24:31.721561   59920 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:24:31.721575   59920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:24:31.885159   59920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:24:32.216124   59920 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220725131741-44543"
	I0725 13:24:32.632247   59920 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
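As the Run lines above show, each addon manifest is staged by scp'ing it into /etc/kubernetes/addons over SSH and then applying the whole set in a single kubectl invocation with repeated -f flags. A minimal sketch of that apply pattern in Go (standard library only; the kubeconfig path and file names are taken from the log, everything else is illustrative rather than minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Manifests staged under /etc/kubernetes/addons, as in the scp lines above.
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // one -f per staged manifest
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("apply failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }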
	I0725 13:24:30.350962   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056388033s)
	I0725 13:24:32.851327   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:32.919793   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:32.950452   60183 logs.go:274] 0 containers: []
	W0725 13:24:32.950464   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:32.950519   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:32.978393   60183 logs.go:274] 0 containers: []
	W0725 13:24:32.978405   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:32.978461   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:33.008027   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.008039   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:33.008095   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:33.038231   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.038243   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:33.038297   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:33.068037   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.068049   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:33.068108   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:33.098144   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.098156   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:33.098219   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:33.131474   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.131488   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:33.131551   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:33.163043   60183 logs.go:274] 0 containers: []
	W0725 13:24:33.163057   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:33.163064   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:33.163071   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:33.225128   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
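The refused connection on localhost:8443 means nothing is listening on the apiserver's secure port, which is consistent with the empty docker ps scans above; the log gatherer then falls back to kubelet, dmesg, Docker, and container-status sources. A quick way to reproduce that probe is a plain TCP dial (a hypothetical check, not minikube code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver's secure port; a refused dial matches the kubectl error above.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }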
	I0725 13:24:33.225142   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:33.225148   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:33.240300   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:33.240316   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
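The scan loop above looks for each control-plane container by a k8s_ name filter; an empty ID list is what produces the "0 containers" and "No container was found" lines. A sketch of the same check (standard library only; the command and flags are copied from the logged Run line):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirror the logged check: list IDs of containers whose name matches the k8s_ prefix.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        ids := strings.Fields(strings.TrimSpace(string(out)))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        if len(ids) == 0 {
            fmt.Println(`No container was found matching "kube-apiserver"`)
        }
    }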
	I0725 13:24:32.673165   59920 addons.go:414] enableAddons completed in 2.300825743s
	I0725 13:24:32.805388   59920 pod_ready.go:102] pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:34.805957   59920 pod_ready.go:102] pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:24:35.303510   59920 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-qfmxw" not found
	I0725 13:24:35.303527   59920 pod_ready.go:81] duration metric: took 4.520786298s waiting for pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace to be "Ready" ...
	E0725 13:24:35.303534   59920 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-qfmxw" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-qfmxw" not found
	I0725 13:24:35.303539   59920 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-r68q6" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.310951   59920 pod_ready.go:92] pod "coredns-6d4b75cb6d-r68q6" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.310962   59920 pod_ready.go:81] duration metric: took 7.417207ms waiting for pod "coredns-6d4b75cb6d-r68q6" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.310968   59920 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.318019   59920 pod_ready.go:92] pod "etcd-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.318038   59920 pod_ready.go:81] duration metric: took 7.06362ms waiting for pod "etcd-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.318051   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.324559   59920 pod_ready.go:92] pod "kube-apiserver-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.324574   59920 pod_ready.go:81] duration metric: took 6.512119ms waiting for pod "kube-apiserver-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.324587   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.332274   59920 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.332286   59920 pod_ready.go:81] duration metric: took 7.691522ms waiting for pod "kube-controller-manager-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.332299   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5xd86" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.504459   59920 pod_ready.go:92] pod "kube-proxy-5xd86" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.504471   59920 pod_ready.go:81] duration metric: took 172.155311ms waiting for pod "kube-proxy-5xd86" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.504480   59920 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.903818   59920 pod_ready.go:92] pod "kube-scheduler-no-preload-20220725131741-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:24:35.903828   59920 pod_ready.go:81] duration metric: took 399.331945ms waiting for pod "kube-scheduler-no-preload-20220725131741-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:24:35.903847   59920 pod_ready.go:38] duration metric: took 5.195020317s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
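The pod_ready waits above poll each system pod until its PodReady condition reports True; note that coredns-6d4b75cb6d-qfmxw disappearing mid-wait is logged and skipped rather than treated as fatal. A minimal client-go sketch of such a readiness check (illustrative only, not minikube's actual waiter; pod and kubeconfig names are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-6d4b75cb6d-r68q6", metav1.GetOptions{})
            if err != nil {
                fmt.Println("skipping:", err) // e.g. pod deleted mid-wait, as in the log
                return
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    fmt.Println(`status "Ready":"True"`)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }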
	I0725 13:24:35.903878   59920 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:24:35.903926   59920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:35.914798   59920 api_server.go:71] duration metric: took 5.542375251s to wait for apiserver process to appear ...
	I0725 13:24:35.914834   59920 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:24:35.914844   59920 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58799/healthz ...
	I0725 13:24:35.921222   59920 api_server.go:266] https://127.0.0.1:58799/healthz returned 200:
	ok
	I0725 13:24:35.922647   59920 api_server.go:140] control plane version: v1.24.2
	I0725 13:24:35.922659   59920 api_server.go:130] duration metric: took 7.816862ms to wait for apiserver health ...
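The health gate above is a plain GET against /healthz on the forwarded apiserver port, expecting HTTP 200 with body "ok". Since the apiserver certificate is signed by minikube's local CA, a standalone sketch has to skip verification (illustrative only; the URL is the forwarded port from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Cert is from minikube's local CA, so skip verification in this sketch.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://127.0.0.1:58799/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }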
	I0725 13:24:35.922664   59920 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:24:36.106279   59920 system_pods.go:59] 8 kube-system pods found
	I0725 13:24:36.106293   59920 system_pods.go:61] "coredns-6d4b75cb6d-r68q6" [623858be-728c-4b5e-8e31-b5713757b87c] Running
	I0725 13:24:36.106297   59920 system_pods.go:61] "etcd-no-preload-20220725131741-44543" [85aead75-e56d-4567-b4b3-67e65f0996ad] Running
	I0725 13:24:36.106301   59920 system_pods.go:61] "kube-apiserver-no-preload-20220725131741-44543" [d465aaac-22eb-46d8-875a-0262cfd269c0] Running
	I0725 13:24:36.106304   59920 system_pods.go:61] "kube-controller-manager-no-preload-20220725131741-44543" [f2309e0f-bca3-409e-a2be-fc7577409b36] Running
	I0725 13:24:36.106307   59920 system_pods.go:61] "kube-proxy-5xd86" [f7413e53-4981-4223-bae2-7a94b1c41206] Running
	I0725 13:24:36.106317   59920 system_pods.go:61] "kube-scheduler-no-preload-20220725131741-44543" [255f59a4-6f75-45a9-8640-2bced1f641fd] Running
	I0725 13:24:36.106324   59920 system_pods.go:61] "metrics-server-5c6f97fb75-wx6t6" [3b1612e1-6629-4a77-bc5c-96599e1fbede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:24:36.106330   59920 system_pods.go:61] "storage-provisioner" [1e217a5f-3b4f-491b-8ff0-b385e6032f65] Running
	I0725 13:24:36.106335   59920 system_pods.go:74] duration metric: took 183.662109ms to wait for pod list to return data ...
	I0725 13:24:36.106339   59920 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:24:36.303371   59920 default_sa.go:45] found service account: "default"
	I0725 13:24:36.303384   59920 default_sa.go:55] duration metric: took 197.033995ms for default service account to be created ...
	I0725 13:24:36.303391   59920 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:24:36.506245   59920 system_pods.go:86] 8 kube-system pods found
	I0725 13:24:36.506263   59920 system_pods.go:89] "coredns-6d4b75cb6d-r68q6" [623858be-728c-4b5e-8e31-b5713757b87c] Running
	I0725 13:24:36.506268   59920 system_pods.go:89] "etcd-no-preload-20220725131741-44543" [85aead75-e56d-4567-b4b3-67e65f0996ad] Running
	I0725 13:24:36.506272   59920 system_pods.go:89] "kube-apiserver-no-preload-20220725131741-44543" [d465aaac-22eb-46d8-875a-0262cfd269c0] Running
	I0725 13:24:36.506276   59920 system_pods.go:89] "kube-controller-manager-no-preload-20220725131741-44543" [f2309e0f-bca3-409e-a2be-fc7577409b36] Running
	I0725 13:24:36.506280   59920 system_pods.go:89] "kube-proxy-5xd86" [f7413e53-4981-4223-bae2-7a94b1c41206] Running
	I0725 13:24:36.506284   59920 system_pods.go:89] "kube-scheduler-no-preload-20220725131741-44543" [255f59a4-6f75-45a9-8640-2bced1f641fd] Running
	I0725 13:24:36.506291   59920 system_pods.go:89] "metrics-server-5c6f97fb75-wx6t6" [3b1612e1-6629-4a77-bc5c-96599e1fbede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:24:36.506296   59920 system_pods.go:89] "storage-provisioner" [1e217a5f-3b4f-491b-8ff0-b385e6032f65] Running
	I0725 13:24:36.506302   59920 system_pods.go:126] duration metric: took 202.90215ms to wait for k8s-apps to be running ...
	I0725 13:24:36.506308   59920 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 13:24:36.506359   59920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:24:36.516963   59920 system_svc.go:56] duration metric: took 10.650541ms WaitForService to wait for kubelet.
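The kubelet check above relies on systemctl is-active --quiet, which reports state purely through its exit code. A sketch of the same idea over plain exec rather than minikube's ssh_runner (simplified here to the unit name alone):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }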
	I0725 13:24:36.516979   59920 kubeadm.go:572] duration metric: took 6.144559272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:24:36.516994   59920 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:24:36.703723   59920 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:24:36.703737   59920 node_conditions.go:123] node cpu capacity is 6
	I0725 13:24:36.703744   59920 node_conditions.go:105] duration metric: took 186.741628ms to run NodePressure ...
	I0725 13:24:36.703754   59920 start.go:216] waiting for startup goroutines ...
	I0725 13:24:36.737087   59920 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:24:36.758866   59920 out.go:177] * Done! kubectl is now configured to use "no-preload-20220725131741-44543" cluster and "default" namespace by default
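The line before "Done!" reports the minor-version difference between the client kubectl and the cluster: 1.24.1 against 1.24.2 gives a minor skew of 0, so no mismatch warning is printed. Computing that figure is just a matter of comparing the minor components of the two version strings; a trivial sketch (the helper below is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings, e.g. "1.24.1" vs "1.24.2" -> 0.
    func minorSkew(a, b string) int {
        ma, _ := strconv.Atoi(strings.Split(a, ".")[1])
        mb, _ := strconv.Atoi(strings.Split(b, ".")[1])
        if ma > mb {
            return ma - mb
        }
        return mb - ma
    }

    func main() {
        fmt.Println("minor skew:", minorSkew("1.24.1", "1.24.2")) // 0, as logged
    }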
	I0725 13:24:35.304650   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064263654s)
	I0725 13:24:35.304758   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:35.304765   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:35.359741   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:35.359783   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:37.873389   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:37.918418   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:37.955388   60183 logs.go:274] 0 containers: []
	W0725 13:24:37.955407   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:37.955466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:37.996813   60183 logs.go:274] 0 containers: []
	W0725 13:24:37.996824   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:37.996887   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:38.029638   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.029653   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:38.029717   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:38.063668   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.063681   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:38.063734   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:38.097181   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.097193   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:38.097248   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:38.128322   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.128337   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:38.128423   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:38.161589   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.161605   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:38.161667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:38.199476   60183 logs.go:274] 0 containers: []
	W0725 13:24:38.199488   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:38.199495   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:38.199501   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:38.263856   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:38.263867   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:38.263874   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:38.278755   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:38.278771   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:40.336830   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05798511s)
	I0725 13:24:40.336946   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:40.336958   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:40.385712   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:40.385733   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:42.900882   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:42.917988   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:42.955271   60183 logs.go:274] 0 containers: []
	W0725 13:24:42.955286   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:42.955386   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:42.990842   60183 logs.go:274] 0 containers: []
	W0725 13:24:42.990861   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:42.990927   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:43.024751   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.024763   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:43.024824   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:43.061278   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.061296   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:43.061361   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:43.091254   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.091266   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:43.091323   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:43.121299   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.121311   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:43.121385   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:43.150795   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.150808   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:43.150899   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:43.184239   60183 logs.go:274] 0 containers: []
	W0725 13:24:43.184251   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:43.184258   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:43.184265   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:43.201029   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:43.201043   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:45.254970   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05385567s)
	I0725 13:24:45.255075   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:45.255081   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:45.294400   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:45.294415   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:45.306088   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:45.306101   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:45.358898   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:47.859143   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:47.918290   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:47.948745   60183 logs.go:274] 0 containers: []
	W0725 13:24:47.948757   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:47.948813   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:47.978054   60183 logs.go:274] 0 containers: []
	W0725 13:24:47.978065   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:47.978125   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:48.006969   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.006982   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:48.007039   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:48.037417   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.037433   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:48.037509   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:48.067050   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.067063   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:48.067118   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:48.095883   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.095896   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:48.095950   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:48.123973   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.123985   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:48.124042   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:48.152316   60183 logs.go:274] 0 containers: []
	W0725 13:24:48.152332   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:48.152341   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:48.152349   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:48.194780   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:48.194796   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:48.207031   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:48.207044   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:48.260819   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:48.260831   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:48.260839   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:48.274383   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:48.274397   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:50.326332   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051862489s)
	I0725 13:24:52.827101   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:52.918437   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:52.951150   60183 logs.go:274] 0 containers: []
	W0725 13:24:52.951162   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:52.951220   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:52.985739   60183 logs.go:274] 0 containers: []
	W0725 13:24:52.985753   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:52.985815   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:53.016602   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.016612   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:53.016659   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:53.046448   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.046459   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:53.046517   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:53.078374   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.078390   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:53.078466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:53.123048   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.123061   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:53.123123   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:53.154579   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.154591   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:53.154646   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:53.195527   60183 logs.go:274] 0 containers: []
	W0725 13:24:53.195542   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:53.195551   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:53.195559   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:53.241474   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:53.241487   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:53.253883   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:53.253895   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:53.311986   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:53.312000   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:53.312008   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:53.327743   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:53.327764   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:24:55.393400   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065560615s)
	I0725 13:24:57.895862   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:24:57.919394   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:24:57.951377   60183 logs.go:274] 0 containers: []
	W0725 13:24:57.951389   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:24:57.951444   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:24:57.979788   60183 logs.go:274] 0 containers: []
	W0725 13:24:57.979801   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:24:57.979860   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:24:58.008898   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.008911   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:24:58.008967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:24:58.037016   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.037029   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:24:58.037089   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:24:58.066009   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.066021   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:24:58.066079   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:24:58.093711   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.093724   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:24:58.093788   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:24:58.123557   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.123570   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:24:58.123626   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:24:58.151991   60183 logs.go:274] 0 containers: []
	W0725 13:24:58.152005   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:24:58.152011   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:24:58.152018   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:24:58.191731   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:24:58.191751   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:24:58.205346   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:24:58.205362   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:24:58.258841   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:24:58.258853   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:24:58.258859   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:24:58.272311   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:24:58.272323   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:00.327133   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054738791s)
	I0725 13:25:02.829132   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:02.920662   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:02.950188   60183 logs.go:274] 0 containers: []
	W0725 13:25:02.950201   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:02.950260   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:02.978580   60183 logs.go:274] 0 containers: []
	W0725 13:25:02.978592   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:02.978646   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:03.006563   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.006576   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:03.006629   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:03.033788   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.033801   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:03.033855   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:03.062179   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.062191   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:03.062245   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:03.091169   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.091189   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:03.091248   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:03.120134   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.120147   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:03.120204   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:03.148569   60183 logs.go:274] 0 containers: []
	W0725 13:25:03.148582   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:03.148588   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:03.148595   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:05.206723   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058055845s)
	I0725 13:25:05.206827   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:05.206834   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:05.244693   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:05.244707   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:05.256822   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:05.256833   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:05.308516   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:05.308531   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:05.308543   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:07.823907   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:07.918681   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:07.951167   60183 logs.go:274] 0 containers: []
	W0725 13:25:07.951179   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:07.951234   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:07.979414   60183 logs.go:274] 0 containers: []
	W0725 13:25:07.979427   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:07.979484   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:08.009108   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.009120   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:08.009178   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:08.038053   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.038070   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:08.038126   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:08.066112   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.066124   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:08.066178   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:08.094804   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.094817   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:08.094874   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:08.123943   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.123955   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:08.124011   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:08.153447   60183 logs.go:274] 0 containers: []
	W0725 13:25:08.153460   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:08.153467   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:08.153474   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:10.205133   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051587517s)
	I0725 13:25:10.205247   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:10.205256   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:10.244085   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:10.244097   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:10.256079   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:10.256095   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:10.307417   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:10.307428   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:10.307435   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:12.823093   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:12.920941   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:12.952408   60183 logs.go:274] 0 containers: []
	W0725 13:25:12.952420   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:12.952476   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:12.981252   60183 logs.go:274] 0 containers: []
	W0725 13:25:12.981269   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:12.981333   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:13.010436   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.010447   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:13.010511   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:13.038121   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.038141   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:13.038208   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:13.068013   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.068025   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:13.068084   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:13.098322   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.098334   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:13.098389   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:13.128619   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.128634   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:13.128701   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:13.157149   60183 logs.go:274] 0 containers: []
	W0725 13:25:13.157166   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:13.157179   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:13.157190   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:13.197722   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:13.197738   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:13.211125   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:13.211147   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:13.263333   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:13.263343   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:13.263350   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:13.276992   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:13.277004   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:15.333288   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056210609s)
	I0725 13:25:17.835729   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:17.921071   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:17.952398   60183 logs.go:274] 0 containers: []
	W0725 13:25:17.952411   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:17.952466   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:17.983512   60183 logs.go:274] 0 containers: []
	W0725 13:25:17.983524   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:17.983579   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:18.012155   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.012166   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:18.012223   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:18.041437   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.041450   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:18.041509   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:18.071064   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.071076   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:18.071133   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:18.100563   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.100576   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:18.100632   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:18.130038   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.130065   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:18.130222   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:18.160243   60183 logs.go:274] 0 containers: []
	W0725 13:25:18.160255   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:18.160262   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:18.160270   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:20.214840   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054499324s)
	I0725 13:25:20.214949   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:20.214957   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:20.254381   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:20.254393   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:20.265948   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:20.265960   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:20.317418   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:20.317429   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:20.317435   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:22.833394   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:22.919747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:22.949763   60183 logs.go:274] 0 containers: []
	W0725 13:25:22.949775   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:22.949833   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:22.979326   60183 logs.go:274] 0 containers: []
	W0725 13:25:22.979338   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:22.979394   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:23.008775   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.008789   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:23.008847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:23.038068   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.038098   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:23.038155   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:23.066885   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.066899   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:23.066948   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:23.095779   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.095792   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:23.095847   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:23.124721   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.124733   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:23.124795   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:23.154730   60183 logs.go:274] 0 containers: []
	W0725 13:25:23.154742   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:23.154749   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:23.154757   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:23.194256   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:23.194269   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:23.205440   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:23.205452   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:23.257296   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:23.257307   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:23.257314   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:23.270751   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:23.270762   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:25:25.325770   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054934272s)
	I0725 13:25:27.826256   60183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:25:27.920179   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:25:27.950490   60183 logs.go:274] 0 containers: []
	W0725 13:25:27.950501   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:25:27.950549   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:25:27.983247   60183 logs.go:274] 0 containers: []
	W0725 13:25:27.983258   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:25:27.983323   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:25:28.019768   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.019777   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:25:28.019833   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:25:28.052617   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.052630   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:25:28.052685   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:25:28.082546   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.082559   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:25:28.082614   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:25:28.111799   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.111814   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:25:28.111884   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:25:28.142096   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.142112   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:25:28.142180   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:25:28.173212   60183 logs.go:274] 0 containers: []
	W0725 13:25:28.173223   60183 logs.go:276] No container was found matching "kube-controller-manager"
	I0725 13:25:28.173230   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:25:28.173237   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:25:28.213670   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:25:28.213689   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:25:28.228963   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:25:28.228980   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:25:28.292093   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:25:28.292105   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:25:28.292112   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0725 13:25:28.305882   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:25:28.305895   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
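
	For reference, the log-gathering loop above can be replayed by hand via `minikube ssh` (or by exec'ing into the node container). This is a minimal sketch using only the commands that appear verbatim in the ssh_runner trace; whether crictl is on the PATH inside the node is an assumption the fallback handles:

	    # Last 400 lines of the Docker and kubelet unit journals
	    sudo journalctl -u docker -n 400
	    sudo journalctl -u kubelet -n 400
	    # Kernel warnings and above
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Container status: prefer crictl, fall back to docker
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a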
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:19:38 UTC, end at Mon 2022-07-25 20:25:33 UTC. --
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.224301762Z" level=info msg="ignoring event" container=a54f833673b213d748f59b84c4014fcc2c1857b4b2bdb4bce11e4534adbb368e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.357136065Z" level=info msg="ignoring event" container=ada121ccb73f129400150bbeacf725a40d3c99ba3cf415017d9927e1f1d5fb7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.427254251Z" level=info msg="ignoring event" container=b6941f2eb3b4c450155e7e8f61ee644ead12e565c10ff46cd996e35a8a9c7e84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.495439137Z" level=info msg="ignoring event" container=2796da19f858e294e8a5edaa43356910aa75d8569bc4fbb3fa118d0ba216b424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.561367074Z" level=info msg="ignoring event" container=10b9793cb27de25c597b7554509affe40111afc5aa4786e3aad713355b66a863 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.627179417Z" level=info msg="ignoring event" container=d08baac5363c99937388bc7174e837634b28efbb152aaa221c2016900097d37e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.713055426Z" level=info msg="ignoring event" container=28a04e4f22fcc8248bc629f2c06254b85833057d8ecab7352e7e0f98f8dfb7eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.778087802Z" level=info msg="ignoring event" container=71cc07c20f5d340e2ab9306aa7bef2e5f75a32c093d8b7f72d7fd869e7489723 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:08 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:08.922328815Z" level=info msg="ignoring event" container=17b52d03f3f1ed558d9fe8ae12d2983b1fbc735c16d4a3b00ce8f9b5bb41d53e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:31 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:31.692632872Z" level=info msg="ignoring event" container=168ab788ba30fb7d38d25041aa39f755f88b89fb9b2ed7e72ee38e0920941301 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:32 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:32.852655184Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:32 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:32.852999669Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:32 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:32.854193004Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:33 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:33.642896202Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 20:24:38 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:38.858900200Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:24:39 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:39.065440747Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:24:42 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:42.362706207Z" level=info msg="ignoring event" container=7a4002c356f64147127b14e01a54fbf3ec3edd4fbd572a9d2f5aba18668469f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:43 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:43.066702038Z" level=info msg="ignoring event" container=288472e032e6587fac9ce9e840fdd6e5207962b8fdef0b7a47ae511ad8dbc6da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:24:46 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:46.554301246Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:46 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:46.554437296Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:24:46 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:24:46.555667365Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:25:30 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:25:30.200579761Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:25:30 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:25:30.200625833Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:25:30 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:25:30.242929190Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:25:30 no-preload-20220725131741-44543 dockerd[553]: time="2022-07-25T20:25:30.897150305Z" level=info msg="ignoring event" container=a1e7968be2a544b1fda1e01468986a02e4fc9dd977e42971e3113cdc935bb381 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	a1e7968be2a54       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   287eed0e4e275
	57bf001902d14       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   55 seconds ago       Running             kubernetes-dashboard        0                   682693f29916d
	4ff08338fbe99       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   c51426c5d4dd9
	c876d04d85189       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   6c2a6be0f69c8
	70d956cdef7ed       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   46b7162b59dd0
	4186c25f0122e       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   874c936c512f4
	c5a0606c13196       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   80f3480abb767
	5fb6f0196adc0       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   f80ed10fef334
	9db6ed85741b4       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   03d53d1070abb
	
	* 
	* ==> coredns [c876d04d8518] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220725131741-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220725131741-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=no-preload-20220725131741-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_24_17_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:24:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220725131741-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:25:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:25:26 +0000   Mon, 25 Jul 2022 20:24:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-20220725131741-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                d6898a0d-e94b-4236-8262-d80df4c73be9
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-r68q6                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     63s
	  kube-system                 etcd-no-preload-20220725131741-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kube-apiserver-no-preload-20220725131741-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-no-preload-20220725131741-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-5xd86                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-no-preload-20220725131741-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-5c6f97fb75-wx6t6                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         61s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-zqmnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-mhjvb                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x5 over 83s)  kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x5 over 83s)  kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x4 over 83s)  kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                76s                kubelet          Node no-preload-20220725131741-44543 status is now: NodeReady
	  Normal  RegisteredNode           64s                node-controller  Node no-preload-20220725131741-44543 event: Registered Node no-preload-20220725131741-44543 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node no-preload-20220725131741-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
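
	The node description above can be regenerated on the node itself; a sketch using the on-node kubectl binary path and kubeconfig location shown in the gathering trace (while the apiserver is down it fails with "connection refused", exactly as logged earlier):

	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig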
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [9db6ed85741b] <==
	* {"level":"info","ts":"2022-07-25T20:24:11.348Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:24:11.348Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:24:11.349Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:24:11.349Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:24:11.349Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:24:12.339Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-20220725131741-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:24:12.340Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:24:12.341Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:24:12.341Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:24:12.342Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:24:12.342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2022-07-25T20:25:30.472Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.648798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-07-25T20:25:30.472Z","caller":"traceutil/trace.go:171","msg":"trace[460097207] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:574; }","duration":"105.05786ms","start":"2022-07-25T20:25:30.367Z","end":"2022-07-25T20:25:30.472Z","steps":["trace[460097207] 'count revisions from in-memory index tree'  (duration: 104.589396ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:25:34 up  1:06,  0 users,  load average: 2.08, 1.45, 1.35
	Linux no-preload-20220725131741-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c5a0606c1319] <==
	* I0725 20:24:16.349393       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:24:17.410900       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:24:17.437007       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 20:24:17.444848       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:24:17.507884       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:24:29.783879       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 20:24:29.883999       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:24:31.698449       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:24:32.207744       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.4.98]
	I0725 20:24:32.592497       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.147.56]
	I0725 20:24:32.604607       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.173.15]
	W0725 20:24:33.094059       1 handler_proxy.go:102] no RequestInfo found in the context
	W0725 20:24:33.094066       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:24:33.094083       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:24:33.094088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0725 20:24:33.094100       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:24:33.095317       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:25:33.053565       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:25:33.053602       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:25:33.053640       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:25:33.054818       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:25:33.054917       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:25:33.054947       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [5fb6f0196adc] <==
	* I0725 20:24:30.204059       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-qfmxw"
	I0725 20:24:32.089801       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:24:32.094114       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:24:32.097509       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:24:32.101216       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-wx6t6"
	I0725 20:24:32.316205       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:24:32.324575       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.327617       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.382342       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0725 20:24:32.383914       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.384181       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:24:32.386810       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.388801       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.388840       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.392335       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.397081       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.397094       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 20:24:32.399687       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.399750       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:24:32.402460       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:24:32.402512       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:24:32.450188       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-zqmnn"
	I0725 20:24:32.450221       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-mhjvb"
	E0725 20:25:26.285965       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 20:25:26.350700       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [70d956cdef7e] <==
	* I0725 20:24:31.513235       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:24:31.513294       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:24:31.513316       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:24:31.694845       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:24:31.694917       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:24:31.694929       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:24:31.694955       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:24:31.694986       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:24:31.695315       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:24:31.695514       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:24:31.695531       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:24:31.696254       1 config.go:317] "Starting service config controller"
	I0725 20:24:31.696285       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:24:31.696299       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:24:31.696302       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:24:31.696635       1 config.go:444] "Starting node config controller"
	I0725 20:24:31.696661       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:24:31.796425       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:24:31.796478       1 shared_informer.go:262] Caches are synced for service config
	I0725 20:24:31.796814       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4186c25f0122] <==
	* W0725 20:24:15.147386       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:24:15.147437       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 20:24:15.182529       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:24:15.182615       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:24:15.204059       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:24:15.204096       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:24:15.214008       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.214047       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.217764       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.217806       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.290192       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.290327       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.323830       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 20:24:15.323872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 20:24:15.356618       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:24:15.356657       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:24:15.388624       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:24:15.388661       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:24:15.510560       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 20:24:15.510647       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 20:24:15.513730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:24:15.513792       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:24:15.514218       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 20:24:15.514250       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0725 20:24:18.114400       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:19:38 UTC, end at Mon 2022-07-25 20:25:34 UTC. --
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746695    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkc4l\" (UniqueName: \"kubernetes.io/projected/623858be-728c-4b5e-8e31-b5713757b87c-kube-api-access-hkc4l\") pod \"coredns-6d4b75cb6d-r68q6\" (UID: \"623858be-728c-4b5e-8e31-b5713757b87c\") " pod="kube-system/coredns-6d4b75cb6d-r68q6"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746753    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrt5\" (UniqueName: \"kubernetes.io/projected/f7413e53-4981-4223-bae2-7a94b1c41206-kube-api-access-jxrt5\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746862    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e217a5f-3b4f-491b-8ff0-b385e6032f65-tmp\") pod \"storage-provisioner\" (UID: \"1e217a5f-3b4f-491b-8ff0-b385e6032f65\") " pod="kube-system/storage-provisioner"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746919    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b3d92a3d-a9e6-4310-865a-8f9cb6d82035-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-mhjvb\" (UID: \"b3d92a3d-a9e6-4310-865a-8f9cb6d82035\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mhjvb"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.746990    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3b1612e1-6629-4a77-bc5c-96599e1fbede-tmp-dir\") pod \"metrics-server-5c6f97fb75-wx6t6\" (UID: \"3b1612e1-6629-4a77-bc5c-96599e1fbede\") " pod="kube-system/metrics-server-5c6f97fb75-wx6t6"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747046    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7413e53-4981-4223-bae2-7a94b1c41206-xtables-lock\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747133    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7dd09b51-1f7c-4726-8129-946ccb611d60-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-zqmnn\" (UID: \"7dd09b51-1f7c-4726-8129-946ccb611d60\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-zqmnn"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747200    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs5z\" (UniqueName: \"kubernetes.io/projected/b3d92a3d-a9e6-4310-865a-8f9cb6d82035-kube-api-access-rgs5z\") pod \"kubernetes-dashboard-5fd5574d9f-mhjvb\" (UID: \"b3d92a3d-a9e6-4310-865a-8f9cb6d82035\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mhjvb"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747278    9627 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7413e53-4981-4223-bae2-7a94b1c41206-kube-proxy\") pod \"kube-proxy-5xd86\" (UID: \"f7413e53-4981-4223-bae2-7a94b1c41206\") " pod="kube-system/kube-proxy-5xd86"
	Jul 25 20:25:27 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:27.747291    9627 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 20:25:28 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:28.900287    9627 request.go:601] Waited for 1.133917224s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 20:25:28 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:28.905615    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220725131741-44543\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220725131741-44543"
	Jul 25 20:25:29 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:29.103966    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220725131741-44543\" already exists" pod="kube-system/kube-scheduler-no-preload-20220725131741-44543"
	Jul 25 20:25:29 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:29.320710    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220725131741-44543\" already exists" pod="kube-system/etcd-no-preload-20220725131741-44543"
	Jul 25 20:25:29 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:29.535278    9627 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220725131741-44543\" already exists" pod="kube-system/kube-apiserver-no-preload-20220725131741-44543"
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244050    9627 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244127    9627 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244269    9627 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lpp9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-wx6t6_kube-system(3b1612e1-6629-4a77-bc5c-96599e1fbede): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:30.244303    9627 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-wx6t6" podUID=3b1612e1-6629-4a77-bc5c-96599e1fbede
	Jul 25 20:25:30 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:30.705449    9627 scope.go:110] "RemoveContainer" containerID="288472e032e6587fac9ce9e840fdd6e5207962b8fdef0b7a47ae511ad8dbc6da"
	Jul 25 20:25:31 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:31.792101    9627 scope.go:110] "RemoveContainer" containerID="288472e032e6587fac9ce9e840fdd6e5207962b8fdef0b7a47ae511ad8dbc6da"
	Jul 25 20:25:31 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:31.792306    9627 scope.go:110] "RemoveContainer" containerID="a1e7968be2a544b1fda1e01468986a02e4fc9dd977e42971e3113cdc935bb381"
	Jul 25 20:25:31 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:31.792455    9627 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-zqmnn_kubernetes-dashboard(7dd09b51-1f7c-4726-8129-946ccb611d60)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-zqmnn" podUID=7dd09b51-1f7c-4726-8129-946ccb611d60
	Jul 25 20:25:32 no-preload-20220725131741-44543 kubelet[9627]: I0725 20:25:32.800873    9627 scope.go:110] "RemoveContainer" containerID="a1e7968be2a544b1fda1e01468986a02e4fc9dd977e42971e3113cdc935bb381"
	Jul 25 20:25:32 no-preload-20220725131741-44543 kubelet[9627]: E0725 20:25:32.801048    9627 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-zqmnn_kubernetes-dashboard(7dd09b51-1f7c-4726-8129-946ccb611d60)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-zqmnn" podUID=7dd09b51-1f7c-4726-8129-946ccb611d60
	
	* 
	* ==> kubernetes-dashboard [57bf001902d1] <==
	* 2022/07/25 20:24:38 Using namespace: kubernetes-dashboard
	2022/07/25 20:24:38 Using in-cluster config to connect to apiserver
	2022/07/25 20:24:38 Using secret token for csrf signing
	2022/07/25 20:24:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 20:24:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 20:24:38 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 20:24:38 Generating JWE encryption key
	2022/07/25 20:24:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 20:24:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 20:24:38 Initializing JWE encryption key from synchronized object
	2022/07/25 20:24:38 Creating in-cluster Sidecar client
	2022/07/25 20:24:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:24:38 Serving insecurely on HTTP port: 9090
	2022/07/25 20:25:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:24:38 Starting overwatch
	
	* 
	* ==> storage-provisioner [4ff08338fbe9] <==
	* I0725 20:24:32.692186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:24:32.701939       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:24:32.702010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:24:32.708071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:24:32.708219       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220725131741-44543_d2512faa-681b-4a42-bd10-28354c4a4537!
	I0725 20:24:32.708978       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a74a2ed0-d980-424c-9fcd-210562437f90", APIVersion:"v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220725131741-44543_d2512faa-681b-4a42-bd10-28354c4a4537 became leader
	I0725 20:24:32.809329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220725131741-44543_d2512faa-681b-4a42-bd10-28354c4a4537!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-wx6t6
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 describe pod metrics-server-5c6f97fb75-wx6t6
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220725131741-44543 describe pod metrics-server-5c6f97fb75-wx6t6: exit status 1 (280.237085ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-wx6t6" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220725131741-44543 describe pod metrics-server-5c6f97fb75-wx6t6: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.77s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:30:27.393999   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:30:29.009909   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:30:29.383452   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:30:37.574682   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:31:30.922150   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:31:36.966035   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:31:38.205446   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:31:44.475045   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:33:07.533074   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:33:12.674956   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:33:56.057128   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:34:10.459399   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:34:15.669557   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:34:20.663933   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:34:35.738022   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:34:43.361015   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:35:27.421499   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:35:29.037523   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:35:29.410801   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:35:33.516542   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:35:43.718540   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:36:30.947930   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:36:36.991122   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:36:38.229268   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:36:44.502224   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 13:36:50.476820   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:36:52.459816   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:36:59.128495   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:37:54.000152   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:38:12.700012   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:38:56.077854   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (443.613127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-20220725131610-44543" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:21:54.974897982Z",
	            "FinishedAt": "2022-07-25T20:21:52.153635121Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d84a1a595955080b294e46d4c0e514ca16b44447ef22b822c1bc5aa4576d787b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d84a1a595955",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "43cf55334515d40188d52abea75fa535d217d7aa8b4c915012814925b60fae46",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (447.199882ms)

-- stdout --
	Running

                                                
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs -n 25: (3.531646414s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                               | old-k8s-version-20220725131610-44543            | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543            | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220725133257-44543      | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | disable-driver-mounts-20220725133257-44543        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:34:04
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:34:04.627172   61786 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:34:04.627387   61786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:34:04.627392   61786 out.go:309] Setting ErrFile to fd 2...
	I0725 13:34:04.627399   61786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:34:04.627522   61786 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:34:04.628000   61786 out.go:303] Setting JSON to false
	I0725 13:34:04.642819   61786 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":16416,"bootTime":1658764828,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:34:04.642925   61786 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:34:04.664640   61786 out.go:177] * [default-k8s-different-port-20220725133258-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:34:04.706811   61786 notify.go:193] Checking for updates...
	I0725 13:34:04.728632   61786 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:34:04.750439   61786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:34:04.771702   61786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:34:04.793905   61786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:34:04.815725   61786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:34:04.838399   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:34:04.839023   61786 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:34:04.910460   61786 docker.go:137] docker version: linux-20.10.17
	I0725 13:34:04.910592   61786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:34:05.043298   61786 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:34:04.96917702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:34:05.086988   61786 out.go:177] * Using the docker driver based on existing profile
	I0725 13:34:05.107975   61786 start.go:284] selected driver: docker
	I0725 13:34:05.108005   61786 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:05.108159   61786 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:34:05.111649   61786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:34:05.244365   61786 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:34:05.170585413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:34:05.244504   61786 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:34:05.244520   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:05.244529   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:05.244542   61786 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:05.286913   61786 out.go:177] * Starting control plane node default-k8s-different-port-20220725133258-44543 in cluster default-k8s-different-port-20220725133258-44543
	I0725 13:34:05.308135   61786 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:34:05.329129   61786 out.go:177] * Pulling base image ...
	I0725 13:34:05.350052   61786 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:34:05.350055   61786 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:34:05.350147   61786 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:34:05.350159   61786 cache.go:57] Caching tarball of preloaded images
	I0725 13:34:05.350324   61786 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:34:05.350349   61786 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:34:05.351198   61786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/config.json ...
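The preload step above short-circuits because the v1.24.2 tarball is already in the local cache; the check is a plain file stat. A minimal shell sketch of the same verification (the cache root and tarball name follow the pattern shown in the log and are illustrative):

    # Does the preloaded-images tarball exist locally?
    CACHE="${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball"
    TARBALL="preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4"
    stat "$CACHE/$TARBALL" >/dev/null 2>&1 && echo "preload present" || echo "preload missing"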
	I0725 13:34:05.414334   61786 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:34:05.414359   61786 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:34:05.414371   61786 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:34:05.414420   61786 start.go:370] acquiring machines lock for default-k8s-different-port-20220725133258-44543: {Name:mk82259bc75cbca30138642157acc7c9a727ddb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:34:05.414495   61786 start.go:374] acquired machines lock for "default-k8s-different-port-20220725133258-44543" in 57.072µs
	I0725 13:34:05.414516   61786 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:34:05.414526   61786 fix.go:55] fixHost starting: 
	I0725 13:34:05.414780   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:34:05.481920   61786 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220725133258-44543: state=Stopped err=<nil>
	W0725 13:34:05.481949   61786 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:34:05.504106   61786 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220725133258-44543" ...
	I0725 13:34:05.525512   61786 cli_runner.go:164] Run: docker start default-k8s-different-port-20220725133258-44543
	I0725 13:34:05.876454   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:34:05.950069   61786 kic.go:415] container "default-k8s-different-port-20220725133258-44543" state is running.
	I0725 13:34:05.950674   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.030858   61786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/config.json ...
	I0725 13:34:06.031375   61786 machine.go:88] provisioning docker machine ...
	I0725 13:34:06.031401   61786 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220725133258-44543"
	I0725 13:34:06.031482   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.112519   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.112732   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.112746   61786 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220725133258-44543 && echo "default-k8s-different-port-20220725133258-44543" | sudo tee /etc/hostname
	I0725 13:34:06.239955   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220725133258-44543
	
	I0725 13:34:06.240048   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.314814   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.314971   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.314987   61786 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220725133258-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220725133258-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220725133258-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:34:06.435146   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
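The two SSH commands above first set the node hostname, then make /etc/hosts agree with it idempotently: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. The same sequence as a standalone sketch (NODE is illustrative):

    NODE=my-node
    sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$NODE\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NODE/" /etc/hosts
      else
        echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
      fi
    fi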
	I0725 13:34:06.435164   61786 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:34:06.435185   61786 ubuntu.go:177] setting up certificates
	I0725 13:34:06.435210   61786 provision.go:83] configureAuth start
	I0725 13:34:06.435282   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.510139   61786 provision.go:138] copyHostCerts
	I0725 13:34:06.510295   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:34:06.510304   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:34:06.510390   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:34:06.510624   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:34:06.510637   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:34:06.510694   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:34:06.510842   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:34:06.510848   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:34:06.510906   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:34:06.511027   61786 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220725133258-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220725133258-44543]
	I0725 13:34:06.640290   61786 provision.go:172] copyRemoteCerts
	I0725 13:34:06.640354   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:34:06.640397   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.714183   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:06.800565   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0725 13:34:06.817495   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:34:06.835492   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:34:06.851531   61786 provision.go:86] duration metric: configureAuth took 416.13686ms
	I0725 13:34:06.851544   61786 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:34:06.851704   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:34:06.851763   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.922644   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.922819   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.922832   61786 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:34:07.045838   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:34:07.045853   61786 ubuntu.go:71] root file system type: overlay
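The root-filesystem probe above is a single GNU df invocation; `overlay` confirms the kic container runs on overlayfs, matching the docker-overlay2 preload used earlier in the log:

    # Print only the filesystem type of / (GNU coreutils)
    df --output=fstype / | tail -n 1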
	I0725 13:34:07.046003   61786 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:34:07.046082   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.116918   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:07.117160   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:07.117211   61786 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:34:07.249188   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:34:07.249277   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.319965   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:07.320101   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:07.320113   61786 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:34:07.446161   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:34:07.446178   61786 machine.go:91] provisioned docker machine in 1.414225697s
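The diff-guarded command above keeps re-runs cheap: the freshly rendered unit is only moved into place, and Docker reloaded, when it differs from what is already installed. The pattern in isolation:

    # Swap in a unit file only when it changed, then reload and restart
    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    sudo diff -u "$CUR" "$NEW" || {
      sudo mv "$NEW" "$CUR"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }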
	I0725 13:34:07.446188   61786 start.go:307] post-start starting for "default-k8s-different-port-20220725133258-44543" (driver="docker")
	I0725 13:34:07.446194   61786 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:34:07.446265   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:34:07.446311   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.517500   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.603110   61786 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:34:07.606519   61786 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:34:07.606534   61786 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:34:07.606542   61786 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:34:07.606551   61786 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:34:07.606561   61786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:34:07.606663   61786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:34:07.606798   61786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:34:07.606947   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:34:07.613740   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:34:07.629891   61786 start.go:310] post-start completed in 183.624484ms
	I0725 13:34:07.629958   61786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:34:07.630015   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.700658   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.785856   61786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:34:07.790469   61786 fix.go:57] fixHost completed within 2.37498005s
	I0725 13:34:07.790481   61786 start.go:82] releasing machines lock for "default-k8s-different-port-20220725133258-44543", held for 2.375014977s
	I0725 13:34:07.790547   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.861116   61786 ssh_runner.go:195] Run: systemctl --version
	I0725 13:34:07.861126   61786 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:34:07.861183   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.861199   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.938182   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.940737   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:08.241696   61786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:34:08.251517   61786 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:34:08.251594   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:34:08.264323   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:34:08.277213   61786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:34:08.340273   61786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:34:08.415371   61786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:34:08.483465   61786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:34:08.713080   61786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:34:08.784666   61786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:34:08.854074   61786 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:34:08.863345   61786 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:34:08.863409   61786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:34:08.867141   61786 start.go:471] Will wait 60s for crictl version
	I0725 13:34:08.867182   61786 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:34:08.968151   61786 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:34:08.968217   61786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:34:09.002469   61786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
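Both runtime probes above are direct CLI queries and can be replayed verbatim on the node:

    sudo crictl version                            # CRI metadata via cri-dockerd (RuntimeApiVersion 1.41.0 here)
    docker version --format '{{.Server.Version}}'  # engine version, 20.10.17 in this run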
	I0725 13:34:09.063979   61786 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:34:09.064085   61786 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220725133258-44543 dig +short host.docker.internal
	I0725 13:34:09.191610   61786 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:34:09.191718   61786 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:34:09.195961   61786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:34:09.205544   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:09.276048   61786 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:34:09.276131   61786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:34:09.305942   61786 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:34:09.305957   61786 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:34:09.306037   61786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:34:09.334786   61786 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:34:09.334808   61786 cache_images.go:84] Images are preloaded, skipping loading
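Image extraction is skipped because everything required for v1.24.2 already appears in `docker images`. A sketch of that gate, with the expected list abridged to three of the images printed above:

    EXPECTED='k8s.gcr.io/kube-apiserver:v1.24.2 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7'
    HAVE=$(docker images --format '{{.Repository}}:{{.Tag}}')
    for img in $EXPECTED; do
      echo "$HAVE" | grep -qx "$img" || echo "missing: $img"
    done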
	I0725 13:34:09.334878   61786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:34:09.407682   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:09.407694   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:09.407709   61786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:34:09.407726   61786 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220725133258-44543 NodeName:default-k8s-different-port-20220725133258-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:34:09.407863   61786 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220725133258-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
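	The block above is the fully rendered kubeadm config that gets shipped to the node as kubeadm.yaml. As a minimal sketch of how such a config can be produced from the option values logged at kubeadm.go:158, assuming a text/template approach; the Opts struct and initTmpl name are illustrative, not minikube's actual identifiers:

	// Sketch: render a kubeadm InitConfiguration fragment from option values,
	// in the spirit of the config dump above. Names here are hypothetical.
	package main

	import (
		"os"
		"text/template"
	)

	type Opts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	var initTmpl = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`))

	func main() {
		// Values copied from the log above for illustration.
		_ = initTmpl.Execute(os.Stdout, Opts{
			AdvertiseAddress: "192.168.76.2",
			BindPort:         8444,
			NodeName:         "default-k8s-different-port-20220725133258-44543",
			CRISocket:        "/var/run/cri-dockerd.sock",
		})
	}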
	I0725 13:34:09.407969   61786 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220725133258-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0725 13:34:09.408026   61786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:34:09.415906   61786 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:34:09.415950   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:34:09.422916   61786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0725 13:34:09.435999   61786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:34:09.447804   61786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0725 13:34:09.459767   61786 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:34:09.463350   61786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
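	The one-liner above makes the control-plane host entry idempotent: strip any stale line for the name, append the fresh mapping, then copy the result over /etc/hosts. A minimal Go sketch of the same rewrite, assuming direct file access rather than minikube's ssh_runner; error handling is trimmed to panics:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Same filter as the logged grep -v: drop any previous entry for this name.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		// The logged command stages via /tmp/h.$$ and sudo cp; writing directly needs root too.
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}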
	I0725 13:34:09.472214   61786 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543 for IP: 192.168.76.2
	I0725 13:34:09.472328   61786 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:34:09.472377   61786 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:34:09.472455   61786 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.key
	I0725 13:34:09.472518   61786 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.key.31bdca25
	I0725 13:34:09.472571   61786 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.key
	I0725 13:34:09.472770   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:34:09.472821   61786 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:34:09.472840   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:34:09.472875   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:34:09.472906   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:34:09.472936   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:34:09.473004   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:34:09.473565   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:34:09.490187   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:34:09.506643   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:34:09.523366   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 13:34:09.539862   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:34:09.556235   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:34:09.572084   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:34:09.588997   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:34:09.605403   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:34:09.622071   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:34:09.639455   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:34:09.666648   61786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:34:09.680404   61786 ssh_runner.go:195] Run: openssl version
	I0725 13:34:09.685377   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:34:09.692933   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.696819   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.696867   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.701960   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:34:09.709308   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:34:09.717057   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.721219   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.721287   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.726658   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:34:09.733604   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:34:09.741720   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.745497   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.745548   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.751361   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
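	Each openssl/ln pair above installs a PEM into the system trust store under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A short Go sketch of one such installation, shelling out to openssl the same way the logged commands do; run as root, and treat the paths as values taken from this log rather than fixed API:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen above
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// test -L || ln -fs, as in the logged command: only link when absent.
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(cert, link); err != nil {
				panic(err)
			}
		}
	}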
	I0725 13:34:09.758844   61786 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:09.758948   61786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:34:09.788556   61786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:34:09.796138   61786 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:34:09.796155   61786 kubeadm.go:626] restartCluster start
	I0725 13:34:09.796211   61786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:34:09.803427   61786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:09.803495   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:09.877185   61786 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220725133258-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:34:09.877366   61786 kubeconfig.go:127] "default-k8s-different-port-20220725133258-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:34:09.877706   61786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:34:09.878802   61786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:34:09.886342   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:09.886396   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:09.894462   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.094989   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.095125   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.104812   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.296311   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.296403   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.306824   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.494856   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.494967   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.505102   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.696865   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.697038   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.707693   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.896785   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.896969   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.907495   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.097072   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.097166   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.107646   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.294983   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.295100   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.304071   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.496628   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.496802   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.507122   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.697167   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.697382   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.708140   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.896909   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.897054   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.907309   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.095351   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.095504   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.107280   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.297402   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.297559   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.307933   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.497420   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.497620   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.509829   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.697477   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.697599   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.708129   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.897571   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.897712   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.908504   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.908514   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.908558   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.916432   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.916446   61786 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 13:34:12.916453   61786 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:34:12.916512   61786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:34:12.948692   61786 docker.go:443] Stopping containers: [21fb696c3038 51677fb6144c b476d4c9ea34 61aef9880797 d95939ee82e4 142e60195cf4 1a100f122ea6 381dfaca547b abf2f82e0e50 e65639a75c81 11bcb130ff7a d0a27f48f794 449a4cccfc67 5548f957dbdf 09e5b2b95ce2]
	I0725 13:34:12.948776   61786 ssh_runner.go:195] Run: docker stop 21fb696c3038 51677fb6144c b476d4c9ea34 61aef9880797 d95939ee82e4 142e60195cf4 1a100f122ea6 381dfaca547b abf2f82e0e50 e65639a75c81 11bcb130ff7a d0a27f48f794 449a4cccfc67 5548f957dbdf 09e5b2b95ce2
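	Stopping the control plane for a restart is two docker calls, as logged above: list every container whose name matches the kube-system pod pattern, then stop the batch. A minimal Go equivalent using os/exec, assuming the docker CLI is on PATH:

	package main

	import (
		"os/exec"
		"strings"
	)

	func main() {
		// Same filter and format flags as the logged "docker ps" invocation.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		// docker stop <id> <id> ..., one batched call as in the log.
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			panic(err)
		}
	}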
	I0725 13:34:12.979483   61786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:34:12.989370   61786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:34:12.996679   61786 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 25 20:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 25 20:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 25 20:33 /etc/kubernetes/scheduler.conf
	
	I0725 13:34:12.996731   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 13:34:13.003759   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 13:34:13.011125   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 13:34:13.018455   61786 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:13.018511   61786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:34:13.025510   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 13:34:13.033159   61786 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:13.033202   61786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:34:13.040388   61786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:34:13.048073   61786 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:34:13.048082   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:13.093387   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.153710   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060020888s)
	I0725 13:34:14.153730   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.329681   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.375596   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.426060   61786 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:34:14.426130   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:14.937020   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:15.437072   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:15.498123   61786 api_server.go:71] duration metric: took 1.071801664s to wait for apiserver process to appear ...
	I0725 13:34:15.498156   61786 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:34:15.498176   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:15.499467   61786 api_server.go:256] stopped: https://127.0.0.1:60205/healthz: Get "https://127.0.0.1:60205/healthz": EOF
	I0725 13:34:16.000075   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:19.004558   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:19.004576   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:19.500619   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:19.507069   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:19.507082   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:20.000615   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:20.006253   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:20.006267   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:20.500871   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:20.506841   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 200:
	ok
	I0725 13:34:20.513394   61786 api_server.go:140] control plane version: v1.24.2
	I0725 13:34:20.513410   61786 api_server.go:130] duration metric: took 5.014188979s to wait for apiserver health ...
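	The healthz wait above polls roughly every 500ms, tolerating EOFs and 500s until the endpoint returns 200. A compact Go sketch of such a loop, assuming certificate verification is skipped the way a bootstrap probe against a self-signed apiserver must be; the URL is the host-mapped port from this log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://127.0.0.1:60205/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
		}
		fmt.Println("timed out waiting for healthz")
	}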
	I0725 13:34:20.513416   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:20.513426   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:20.513437   61786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:34:20.524394   61786 system_pods.go:59] 8 kube-system pods found
	I0725 13:34:20.524410   61786 system_pods.go:61] "coredns-6d4b75cb6d-ltpwj" [43fe43ee-d181-4a21-936f-c588e810d1b8] Running
	I0725 13:34:20.524414   61786 system_pods.go:61] "etcd-default-k8s-different-port-20220725133258-44543" [e409d4c7-e1f8-4825-b013-df9d0e6680d1] Running
	I0725 13:34:20.524422   61786 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [e373ecf2-4fb2-436f-b520-e05c162005e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 13:34:20.524429   61786 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [2203416f-f18e-4c6c-bf8f-62fe42f5d716] Running
	I0725 13:34:20.524433   61786 system_pods.go:61] "kube-proxy-bsbv8" [00380a03-69be-4582-bc91-be2e992a8756] Running
	I0725 13:34:20.524439   61786 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [ee2345ff-7e0e-4e32-a303-ec8637f9a6e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:34:20.524446   61786 system_pods.go:61] "metrics-server-5c6f97fb75-dt6cw" [5f26aec3-73de-457a-ab6e-6b8db807386c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:34:20.524451   61786 system_pods.go:61] "storage-provisioner" [872443cb-9c58-4914-bfd8-9c919c4c2729] Running
	I0725 13:34:20.524454   61786 system_pods.go:74] duration metric: took 11.01124ms to wait for pod list to return data ...
	I0725 13:34:20.524461   61786 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:34:20.528607   61786 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:34:20.528628   61786 node_conditions.go:123] node cpu capacity is 6
	I0725 13:34:20.528639   61786 node_conditions.go:105] duration metric: took 4.173368ms to run NodePressure ...
	I0725 13:34:20.528651   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:20.692164   61786 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 13:34:20.699554   61786 kubeadm.go:777] kubelet initialised
	I0725 13:34:20.699567   61786 kubeadm.go:778] duration metric: took 7.38462ms waiting for restarted kubelet to initialise ...
	I0725 13:34:20.699575   61786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:34:20.706455   61786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.713491   61786 pod_ready.go:92] pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:20.713502   61786 pod_ready.go:81] duration metric: took 7.031927ms waiting for pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.713509   61786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.720234   61786 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:20.720245   61786 pod_ready.go:81] duration metric: took 6.729713ms waiting for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.720266   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:22.736135   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:24.737145   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:26.739266   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:28.739639   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:30.237640   61786 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.237653   61786 pod_ready.go:81] duration metric: took 9.516001331s waiting for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.237660   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.747406   61786 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.747419   61786 pod_ready.go:81] duration metric: took 509.699097ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.747427   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bsbv8" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.752023   61786 pod_ready.go:92] pod "kube-proxy-bsbv8" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.752032   61786 pod_ready.go:81] duration metric: took 4.600741ms waiting for pod "kube-proxy-bsbv8" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.752038   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.756171   61786 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.756179   61786 pod_ready.go:81] duration metric: took 4.135517ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.756185   61786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:32.769583   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:35.266268   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:37.269544   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:39.767177   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:42.270129   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:44.770171   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:47.266382   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:49.270908   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:51.766788   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:53.770199   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:56.267530   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:58.270364   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:00.271370   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:02.770491   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:05.267811   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:07.268019   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:09.269175   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:11.771924   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:14.268303   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:16.269490   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:18.269647   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:20.269732   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:22.770167   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:25.272345   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:27.768425   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:29.772730   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:32.269716   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:34.272269   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:36.769112   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:38.770762   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:41.269690   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:43.270881   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:45.770163   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:47.770257   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:49.772467   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:52.271997   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:54.770220   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:56.770972   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:58.772954   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:01.271748   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:03.769893   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:05.771682   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:08.272756   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:10.772210   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:13.269694   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:15.271259   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:17.271758   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:19.771816   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:22.271153   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:24.273500   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:26.771501   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:28.772146   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:30.773207   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:33.272043   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:35.773055   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:38.271505   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:40.771959   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:43.271115   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:45.272040   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:47.272525   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:49.771562   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:52.272147   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:54.273010   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:56.274371   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:58.774235   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:01.274355   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:03.773714   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:05.773848   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:07.774416   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:10.272739   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:12.273147   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:14.774766   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:17.272082   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:19.273723   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:21.275454   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:23.774025   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:25.774734   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:28.275012   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:30.275495   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:32.775435   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:35.273955   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:37.773340   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:40.273624   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:42.275705   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:44.772990   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:46.776013   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:49.275522   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:51.776201   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:54.272799   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:56.276129   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:58.776449   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:01.276937   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:03.775275   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:05.776601   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:08.275066   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:10.773865   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:12.777289   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:15.276931   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:17.277015   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:19.777784   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:22.274664   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:24.277500   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:26.777501   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:28.777629   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:30.768525   61786 pod_ready.go:81] duration metric: took 4m0.003905943s waiting for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" ...
	E0725 13:38:30.768539   61786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:38:30.768550   61786 pod_ready.go:38] duration metric: took 4m10.059123063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:38:30.768619   61786 kubeadm.go:630] restartCluster took 4m20.959894497s
	W0725 13:38:30.768693   61786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 13:38:30.768708   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:38:33.097038   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.328248172s)
	I0725 13:38:33.097098   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:38:33.106317   61786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:38:33.113479   61786 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:38:33.113523   61786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:38:33.120573   61786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:38:33.120592   61786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:38:33.404685   61786 out.go:204]   - Generating certificates and keys ...
	I0725 13:38:34.559189   61786 out.go:204]   - Booting up control plane ...
	I0725 13:38:41.609910   61786 out.go:204]   - Configuring RBAC rules ...
	I0725 13:38:41.984727   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:38:41.984743   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:38:41.984776   61786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:38:41.984845   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:41.984852   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=default-k8s-different-port-20220725133258-44543 minikube.k8s.io/updated_at=2022_07_25T13_38_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:41.995147   61786 ops.go:34] apiserver oom_adj: -16
	I0725 13:38:42.131241   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:42.687779   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:43.189737   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:43.689797   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:44.188581   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:44.688143   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:45.189843   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:45.688099   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:46.189813   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:46.689855   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:47.189263   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:47.689377   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:48.188105   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:48.688418   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:49.187882   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:49.689992   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:50.188492   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:50.689222   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:51.190147   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:51.688570   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:52.188695   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:52.688302   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:53.189368   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:53.688182   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.188476   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.688006   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.741570   61786 kubeadm.go:1045] duration metric: took 12.756406696s to wait for elevateKubeSystemPrivileges.
	I0725 13:38:54.741586   61786 kubeadm.go:397] StartCluster complete in 4m44.96945209s
	I0725 13:38:54.741601   61786 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:38:54.741678   61786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:38:54.742213   61786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:38:55.258629   61786 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220725133258-44543" rescaled to 1
	I0725 13:38:55.258668   61786 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:38:55.258680   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:38:55.258695   61786 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:38:55.282860   61786 out.go:177] * Verifying Kubernetes components...
	I0725 13:38:55.258826   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:38:55.282931   61786 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282932   61786 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282939   61786 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282940   61786 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.314162   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:38:55.355956   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:38:55.355963   61786 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355964   61786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355964   61786 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355960   61786 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220725133258-44543"
	W0725 13:38:55.355985   61786 addons.go:162] addon dashboard should already be in state true
	W0725 13:38:55.355980   61786 addons.go:162] addon storage-provisioner should already be in state true
	W0725 13:38:55.355972   61786 addons.go:162] addon metrics-server should already be in state true
	I0725 13:38:55.356030   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356035   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356040   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356345   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.356457   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.356516   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.357205   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.377760   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.503130   61786 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.522805   61786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0725 13:38:55.522844   61786 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:38:55.601767   61786 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:38:55.544055   61786 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:38:55.544114   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.580996   61786 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:38:55.598483   61786 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220725133258-44543" to be "Ready" ...
	I0725 13:38:55.622916   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:38:55.622930   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:38:55.622939   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:38:55.623010   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.623338   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.626513   61786 node_ready.go:49] node "default-k8s-different-port-20220725133258-44543" has status "Ready":"True"
	I0725 13:38:55.680887   61786 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:38:55.659957   61786 node_ready.go:38] duration metric: took 37.0254ms waiting for node "default-k8s-different-port-20220725133258-44543" to be "Ready" ...
	I0725 13:38:55.660000   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.718012   61786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:38:55.718127   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:38:55.718145   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:38:55.718254   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.729313   61786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace to be "Ready" ...
	I0725 13:38:55.755052   61786 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:38:55.755074   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:38:55.755201   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.759149   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.819926   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.824327   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.852585   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.911420   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:38:55.930541   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:38:55.930555   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:38:55.947415   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:38:55.947430   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:38:56.015152   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:38:56.015187   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:38:56.024307   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:38:56.024322   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:38:56.037145   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:38:56.128060   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:38:56.208962   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:38:56.208980   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:38:56.314257   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:38:56.314275   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:38:56.500909   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:38:56.500925   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:38:56.505830   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149864883s)
	I0725 13:38:56.505848   61786 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0725 13:38:56.539806   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:38:56.539822   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:38:56.630424   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:38:56.630457   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:38:56.706962   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:38:56.706979   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:38:56.735501   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:38:56.735519   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:38:56.740969   61786 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-krc4w" not found
	I0725 13:38:56.740985   61786 pod_ready.go:81] duration metric: took 1.011621188s waiting for pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace to be "Ready" ...
	E0725 13:38:56.740999   61786 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-krc4w" not found
	I0725 13:38:56.741009   61786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace to be "Ready" ...
	I0725 13:38:56.818086   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:38:56.818101   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:38:56.844768   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:38:56.929491   61786 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:57.767005   61786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:38:57.841616   61786 addons.go:414] enableAddons completed in 2.582848432s
	I0725 13:38:58.756866   61786 pod_ready.go:102] pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace has status "Ready":"False"
	I0725 13:39:00.755719   61786 pod_ready.go:92] pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.755735   61786 pod_ready.go:81] duration metric: took 4.014600528s waiting for pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.755745   61786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.762033   61786 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.762042   61786 pod_ready.go:81] duration metric: took 6.291089ms waiting for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.762049   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.767591   61786 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.767601   61786 pod_ready.go:81] duration metric: took 5.547326ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.767610   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.777675   61786 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.777686   61786 pod_ready.go:81] duration metric: took 10.069146ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.777694   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdsqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.783826   61786 pod_ready.go:92] pod "kube-proxy-pdsqs" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.783835   61786 pod_ready.go:81] duration metric: took 6.136533ms waiting for pod "kube-proxy-pdsqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.783841   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:01.152734   61786 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:01.152745   61786 pod_ready.go:81] duration metric: took 368.887729ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:01.152751   61786 pod_ready.go:38] duration metric: took 5.434537401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:39:01.152763   61786 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:39:01.152815   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:39:01.178939   61786 api_server.go:71] duration metric: took 5.920074581s to wait for apiserver process to appear ...
	I0725 13:39:01.178955   61786 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:39:01.178962   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:39:01.184599   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 200:
	ok
	I0725 13:39:01.185840   61786 api_server.go:140] control plane version: v1.24.2
	I0725 13:39:01.185848   61786 api_server.go:130] duration metric: took 6.888886ms to wait for apiserver health ...
	I0725 13:39:01.185853   61786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:39:01.356451   61786 system_pods.go:59] 8 kube-system pods found
	I0725 13:39:01.356466   61786 system_pods.go:61] "coredns-6d4b75cb6d-whj7v" [ee95aea1-d131-4524-a2e1-04d0c4da8e20] Running
	I0725 13:39:01.356470   61786 system_pods.go:61] "etcd-default-k8s-different-port-20220725133258-44543" [9971c3fd-8bc1-4799-825c-47d542d172cd] Running
	I0725 13:39:01.356474   61786 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [003259d7-d067-4c90-b5bf-34a9c60d430c] Running
	I0725 13:39:01.356477   61786 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [c292e2ae-00d3-48c2-8d9a-e06a2301d358] Running
	I0725 13:39:01.356483   61786 system_pods.go:61] "kube-proxy-pdsqs" [ab647055-f1f8-4144-a7a2-1d7a7da1e1cf] Running
	I0725 13:39:01.356496   61786 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [2c7cb5cc-4c11-4aee-a9c7-9e657d1b3610] Running
	I0725 13:39:01.356502   61786 system_pods.go:61] "metrics-server-5c6f97fb75-6tbqr" [d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:39:01.356511   61786 system_pods.go:61] "storage-provisioner" [a99d3e3f-11b6-4b57-9e40-e684accad53d] Running
	I0725 13:39:01.356515   61786 system_pods.go:74] duration metric: took 170.651524ms to wait for pod list to return data ...
	I0725 13:39:01.356521   61786 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:39:01.553425   61786 default_sa.go:45] found service account: "default"
	I0725 13:39:01.553439   61786 default_sa.go:55] duration metric: took 196.906744ms for default service account to be created ...
	I0725 13:39:01.553446   61786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:39:01.754454   61786 system_pods.go:86] 8 kube-system pods found
	I0725 13:39:01.754469   61786 system_pods.go:89] "coredns-6d4b75cb6d-whj7v" [ee95aea1-d131-4524-a2e1-04d0c4da8e20] Running
	I0725 13:39:01.754473   61786 system_pods.go:89] "etcd-default-k8s-different-port-20220725133258-44543" [9971c3fd-8bc1-4799-825c-47d542d172cd] Running
	I0725 13:39:01.754477   61786 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [003259d7-d067-4c90-b5bf-34a9c60d430c] Running
	I0725 13:39:01.754481   61786 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [c292e2ae-00d3-48c2-8d9a-e06a2301d358] Running
	I0725 13:39:01.754484   61786 system_pods.go:89] "kube-proxy-pdsqs" [ab647055-f1f8-4144-a7a2-1d7a7da1e1cf] Running
	I0725 13:39:01.754488   61786 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [2c7cb5cc-4c11-4aee-a9c7-9e657d1b3610] Running
	I0725 13:39:01.754496   61786 system_pods.go:89] "metrics-server-5c6f97fb75-6tbqr" [d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:39:01.754500   61786 system_pods.go:89] "storage-provisioner" [a99d3e3f-11b6-4b57-9e40-e684accad53d] Running
	I0725 13:39:01.754505   61786 system_pods.go:126] duration metric: took 201.049861ms to wait for k8s-apps to be running ...
	I0725 13:39:01.754512   61786 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 13:39:01.754564   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:39:01.765642   61786 system_svc.go:56] duration metric: took 11.126618ms WaitForService to wait for kubelet.
	I0725 13:39:01.765659   61786 kubeadm.go:572] duration metric: took 6.506780003s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:39:01.765680   61786 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:39:01.952036   61786 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:39:01.952050   61786 node_conditions.go:123] node cpu capacity is 6
	I0725 13:39:01.952056   61786 node_conditions.go:105] duration metric: took 186.353687ms to run NodePressure ...
	I0725 13:39:01.952064   61786 start.go:216] waiting for startup goroutines ...
	I0725 13:39:01.984984   61786 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:39:02.007662   61786 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220725133258-44543" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:21:55 UTC, end at Mon 2022-07-25 20:39:37 UTC. --
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.451853324Z" level=info msg="Processing signal 'terminated'"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.452788005Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.453258112Z" level=info msg="Daemon shutdown complete"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.453320986Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: docker.service: Succeeded.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.506263841Z" level=info msg="Starting up"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508857550Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508891909Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508909432Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508917186Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509870019Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509899398Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509912393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509918763Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.513919873Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.517902418Z" level=info msg="Loading containers: start."
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.592180966Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.621348334Z" level=info msg="Loading containers: done."
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.629449532Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.629505415Z" level=info msg="Daemon has completed initialization"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.651604471Z" level=info msg="API listen on [::]:2376"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.655414726Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-07-25T20:39:39Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:39:39 up  1:20,  0 users,  load average: 0.74, 0.90, 1.06
	Linux old-k8s-version-20220725131610-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:21:55 UTC, end at Mon 2022-07-25 20:39:39 UTC. --
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: I0725 20:39:38.866178   24432 server.go:410] Version: v1.16.0
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: I0725 20:39:38.866418   24432 plugins.go:100] No cloud provider specified.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: I0725 20:39:38.866428   24432 server.go:773] Client rotation is on, will bootstrap in background
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: I0725 20:39:38.868286   24432 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: W0725 20:39:38.868983   24432 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: W0725 20:39:38.869048   24432 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 kubelet[24432]: F0725 20:39:38.869074   24432 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 20:39:38 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 931.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: I0725 20:39:39.613353   24473 server.go:410] Version: v1.16.0
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: I0725 20:39:39.613537   24473 plugins.go:100] No cloud provider specified.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: I0725 20:39:39.613549   24473 server.go:773] Client rotation is on, will bootstrap in background
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: I0725 20:39:39.615324   24473 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: W0725 20:39:39.615974   24473 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: W0725 20:39:39.616035   24473 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 kubelet[24473]: F0725 20:39:39.616060   24473 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 20:39:39 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 13:39:39.363409   62255 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (449.650295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220725131610-44543" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (43.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220725132539-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543: exit status 2 (16.106965181s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543: exit status 2 (16.109377838s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220725132539-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220725132539-44543
helpers_test.go:235: (dbg) docker inspect embed-certs-20220725132539-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4",
	        "Created": "2022-07-25T20:25:47.150688175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:26:49.821137669Z",
	            "FinishedAt": "2022-07-25T20:26:47.876564535Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/hosts",
	        "LogPath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4-json.log",
	        "Name": "/embed-certs-20220725132539-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220725132539-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220725132539-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220725132539-44543",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220725132539-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220725132539-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220725132539-44543",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220725132539-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1761eaa7d1b2b77ab78790376d6fa7503f1514dcb85774044a1ed29a4cee40c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59423"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59426"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f1761eaa7d1b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220725132539-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a26cbaf6dca3",
	                        "embed-certs-20220725132539-44543"
	                    ],
	                    "NetworkID": "b9478eb32f8b0a21795cbbbab1e802bcb76d9edc2f3ea05b264734f7d0a9eaf5",
	                    "EndpointID": "79c6dcaa036d2e9475158bdb7afc0294181cc139928c22f65085c3db00a5932e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
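The inspect output above ends with the container's published ports: each container port (22/tcp, 2376/tcp, 32443/tcp, 5000/tcp, 8443/tcp) is bound to a dynamically assigned host port on 0.0.0.0. A minimal Go sketch of the template query the harness itself uses to recover one of these mappings (the container name is taken from this run, and the template mirrors the one logged by the cli_runner lines below):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Index into .NetworkSettings.Ports["22/tcp"][0].HostPort,
		// the same Go template the cli_runner log lines show.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "embed-certs-20220725132539-44543").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("22/tcp published on host port %s", out) // 59422 in this run
	}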
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220725132539-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220725132539-44543 logs -n 25: (2.858632348s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |               Profile                |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725125922-44543         | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                      |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                      |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725125922-44543         | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:17 PDT |
	|         | kubenet-20220725125922-44543                      |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:20 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
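	The Args column wraps long invocations across rows; read as a single command line, the final start entry above corresponds to the following (assuming the same binary the (dbg) Run lines in this report use):

		out/minikube-darwin-amd64 start -p embed-certs-20220725132539-44543 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.24.2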
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:26:48
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
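	The header above documents the glog/klog line format used by everything that follows: a level letter (I/W/E/F), the date as mmdd, a microsecond timestamp, a thread id, and the file:line that emitted the message. A minimal Go sketch of a parser for that format, useful when post-processing these logs (the regexp and field names are illustrative, not part of minikube):

		package main

		import (
			"fmt"
			"regexp"
		)

		// Matches headers like: I0725 13:26:48.547427   60896 out.go:296] msg
		var klogLine = regexp.MustCompile(
			`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

		func main() {
			m := klogLine.FindStringSubmatch(
				"I0725 13:26:48.547427   60896 out.go:296] Setting OutFile to fd 1 ...")
			if m != nil {
				fmt.Printf("level=%s date=%s time=%s tid=%s src=%s:%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6], m[7])
			}
		}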
	I0725 13:26:48.547427   60896 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:26:48.547663   60896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:26:48.547668   60896 out.go:309] Setting ErrFile to fd 2...
	I0725 13:26:48.547672   60896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:26:48.547782   60896 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:26:48.548312   60896 out.go:303] Setting JSON to false
	I0725 13:26:48.563654   60896 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15980,"bootTime":1658764828,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:26:48.563800   60896 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:26:48.585799   60896 out.go:177] * [embed-certs-20220725132539-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:26:48.627711   60896 notify.go:193] Checking for updates...
	I0725 13:26:48.648811   60896 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:26:48.669719   60896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:26:48.690638   60896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:26:48.712084   60896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:26:48.734191   60896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:26:48.756550   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:26:48.757217   60896 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:26:48.825997   60896 docker.go:137] docker version: linux-20.10.17
	I0725 13:26:48.826132   60896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:26:48.960295   60896 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:26:48.899440621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:26:49.004155   60896 out.go:177] * Using the docker driver based on existing profile
	I0725 13:26:49.025981   60896 start.go:284] selected driver: docker
	I0725 13:26:49.026017   60896 start.go:808] validating driver "docker" against &{Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:49.026150   60896 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:26:49.029491   60896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:26:49.162003   60896 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:26:49.103146968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:26:49.162174   60896 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:26:49.162190   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:26:49.162199   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:26:49.162223   60896 start_flags.go:310] config:
	{Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:49.205870   60896 out.go:177] * Starting control plane node embed-certs-20220725132539-44543 in cluster embed-certs-20220725132539-44543
	I0725 13:26:49.226856   60896 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:26:49.249040   60896 out.go:177] * Pulling base image ...
	I0725 13:26:49.291616   60896 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:26:49.291652   60896 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:26:49.291684   60896 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:26:49.291697   60896 cache.go:57] Caching tarball of preloaded images
	I0725 13:26:49.291833   60896 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:26:49.291855   60896 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
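	The preload check above resolves a cache file whose name encodes the preload schema version (v18), the Kubernetes version, the container runtime, the storage driver, and the architecture. A sketch of that naming scheme as it appears in the paths logged here; the helper itself is hypothetical and the home directory is shortened for illustration:

		package main

		import (
			"fmt"
			"path/filepath"
		)

		// preloadTarball reproduces the cache-path pattern visible in this log:
		//   <home>/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
		func preloadTarball(minikubeHome, k8sVersion, runtime, storageDriver, arch string) string {
			name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-%s-%s.tar.lz4",
				k8sVersion, runtime, storageDriver, arch)
			return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
		}

		func main() {
			fmt.Println(preloadTarball("/Users/jenkins/.minikube", "v1.24.2", "docker", "overlay2", "amd64"))
		}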
	I0725 13:26:49.292505   60896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/config.json ...
	I0725 13:26:49.355938   60896 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:26:49.355966   60896 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:26:49.355978   60896 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:26:49.356021   60896 start.go:370] acquiring machines lock for embed-certs-20220725132539-44543: {Name:mkedcda8c6ffd244a6eb5ea62b1d8110eb07449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:26:49.356105   60896 start.go:374] acquired machines lock for "embed-certs-20220725132539-44543" in 59.916µs
	I0725 13:26:49.356125   60896 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:26:49.356136   60896 fix.go:55] fixHost starting: 
	I0725 13:26:49.356360   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:26:49.424125   60896 fix.go:103] recreateIfNeeded on embed-certs-20220725132539-44543: state=Stopped err=<nil>
	W0725 13:26:49.424176   60896 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:26:49.446453   60896 out.go:177] * Restarting existing docker container for "embed-certs-20220725132539-44543" ...
	I0725 13:26:49.468132   60896 cli_runner.go:164] Run: docker start embed-certs-20220725132539-44543
	I0725 13:26:49.813394   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:26:49.886745   60896 kic.go:415] container "embed-certs-20220725132539-44543" state is running.
	I0725 13:26:49.887403   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:49.963095   60896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/config.json ...
	I0725 13:26:49.963502   60896 machine.go:88] provisioning docker machine ...
	I0725 13:26:49.963527   60896 ubuntu.go:169] provisioning hostname "embed-certs-20220725132539-44543"
	I0725 13:26:49.963596   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.039063   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.039288   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.039301   60896 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220725132539-44543 && echo "embed-certs-20220725132539-44543" | sudo tee /etc/hostname
	I0725 13:26:50.170431   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220725132539-44543
	
	I0725 13:26:50.170514   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.246235   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.246398   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.246415   60896 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220725132539-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220725132539-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220725132539-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:26:50.365664   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
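	The shell block above makes the hosts entry idempotent: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended. The same logic as a small Go sketch (the file path and hostname are taken from this log; purely illustrative, and editing the real /etc/hosts needs root):

		package main

		import (
			"fmt"
			"os"
			"regexp"
		)

		func ensureHostsEntry(path, hostname string) error {
			data, err := os.ReadFile(path)
			if err != nil {
				return err
			}
			re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
			entry := "127.0.1.1 " + hostname
			var out string
			if re.Match(data) {
				out = re.ReplaceAllString(string(data), entry) // rewrite existing line
			} else {
				out = string(data) + entry + "\n" // append a new one
			}
			return os.WriteFile(path, []byte(out), 0644)
		}

		func main() {
			fmt.Println(ensureHostsEntry("/etc/hosts", "embed-certs-20220725132539-44543"))
		}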
	I0725 13:26:50.365688   60896 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:26:50.365709   60896 ubuntu.go:177] setting up certificates
	I0725 13:26:50.365719   60896 provision.go:83] configureAuth start
	I0725 13:26:50.365796   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:50.440349   60896 provision.go:138] copyHostCerts
	I0725 13:26:50.440475   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:26:50.440485   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:26:50.440587   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:26:50.440815   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:26:50.440830   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:26:50.440890   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:26:50.441056   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:26:50.441062   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:26:50.441120   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:26:50.441275   60896 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220725132539-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220725132539-44543]
	I0725 13:26:50.557687   60896 provision.go:172] copyRemoteCerts
	I0725 13:26:50.557751   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:26:50.557825   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.629344   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:50.718627   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:26:50.735715   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 13:26:50.751806   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:26:50.768118   60896 provision.go:86] duration metric: configureAuth took 402.373037ms
	I0725 13:26:50.768132   60896 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:26:50.768266   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:26:50.768315   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.840378   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.840536   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.840548   60896 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:26:50.965802   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:26:50.965819   60896 ubuntu.go:71] root file system type: overlay
	I0725 13:26:50.966002   60896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:26:50.966080   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.036849   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:51.036995   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:51.037043   60896 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:26:51.167067   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:26:51.167151   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.237871   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:51.238049   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:51.238062   60896 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:26:51.363554   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
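	The SSH command above is a compare-and-swap update: the freshly rendered unit is diffed against the installed one, and only when they differ is it moved into place and followed by daemon-reload, enable, and restart, so an unchanged unit never restarts a running dockerd. A minimal local Go sketch of the same pattern, with the paths and systemctl arguments taken from that command (running it for real requires root):

		package main

		import (
			"bytes"
			"fmt"
			"os"
			"os/exec"
		)

		func main() {
			const unit = "/lib/systemd/system/docker.service"
			cur, _ := os.ReadFile(unit) // may not exist yet; empty is fine
			next, err := os.ReadFile(unit + ".new")
			if err != nil {
				fmt.Println("no rendered unit:", err)
				return
			}
			if bytes.Equal(cur, next) {
				return // identical: leave the running service untouched
			}
			if err := os.Rename(unit+".new", unit); err != nil {
				fmt.Println(err)
				return
			}
			for _, args := range [][]string{
				{"-f", "daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
			} {
				exec.Command("systemctl", args...).Run()
			}
		}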
	I0725 13:26:51.363567   60896 machine.go:91] provisioned docker machine in 1.400015538s
	I0725 13:26:51.363577   60896 start.go:307] post-start starting for "embed-certs-20220725132539-44543" (driver="docker")
	I0725 13:26:51.363582   60896 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:26:51.363643   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:26:51.363691   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.437205   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.527183   60896 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:26:51.530742   60896 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:26:51.530759   60896 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:26:51.530765   60896 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:26:51.530770   60896 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:26:51.530783   60896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:26:51.530909   60896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:26:51.531049   60896 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:26:51.531209   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:26:51.538152   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:26:51.555946   60896 start.go:310] post-start completed in 192.354602ms
	I0725 13:26:51.556040   60896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:26:51.556105   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.627974   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.714198   60896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
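	Both disk probes above use the same awk idiom: NR==2 selects the first data row under df's header, and $5 / $4 pick the use-percentage and free-gigabytes columns respectively. Illustrative output (values hypothetical):

	df -h /var  | awk 'NR==2{print $5}'   # e.g. 34%
	df -BG /var | awk 'NR==2{print $4}'   # e.g. 13G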
	I0725 13:26:51.718597   60896 fix.go:57] fixHost completed within 2.362392942s
	I0725 13:26:51.718610   60896 start.go:82] releasing machines lock for "embed-certs-20220725132539-44543", held for 2.362429297s
	I0725 13:26:51.718700   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:51.789671   60896 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:26:51.789678   60896 ssh_runner.go:195] Run: systemctl --version
	I0725 13:26:51.789751   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.789757   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.866411   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.867863   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:52.170482   60896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:26:52.180365   60896 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:26:52.180423   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:26:52.191899   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:26:52.204163   60896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:26:52.271546   60896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:26:52.341953   60896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:26:52.404515   60896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:26:52.623120   60896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:26:52.693995   60896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:26:52.758221   60896 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:26:52.767593   60896 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:26:52.767655   60896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:26:52.771384   60896 start.go:471] Will wait 60s for crictl version
	I0725 13:26:52.771432   60896 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:26:52.874937   60896 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:26:52.875000   60896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:26:52.909296   60896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:26:52.986216   60896 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:26:52.986400   60896 cli_runner.go:164] Run: docker exec -t embed-certs-20220725132539-44543 dig +short host.docker.internal
	I0725 13:26:53.115923   60896 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:26:53.116029   60896 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:26:53.121448   60896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
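	The one-liner above is an idempotent /etc/hosts update: grep -v strips any existing host.minikube.internal entry, the fresh mapping is appended, and the result is written back through a temp file with sudo cp, since a shell redirection itself would not run with elevated privileges. Broken out as a sketch (IP and hostname taken from this run):

	IP=192.168.65.2; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts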
	I0725 13:26:53.131166   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:53.203036   60896 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:26:53.203111   60896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:26:53.232252   60896 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:26:53.232269   60896 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:26:53.232348   60896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:26:53.262252   60896 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:26:53.262272   60896 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:26:53.262351   60896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:26:53.333671   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:26:53.333682   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:26:53.333696   60896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:26:53.333709   60896 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220725132539-44543 NodeName:embed-certs-20220725132539-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:26:53.333811   60896 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220725132539-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
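
	This multi-document config is consumed phase by phase on the restart path rather than through a single kubeadm init; condensed (PATH setup omitted) from the commands that appear later in this log:

	sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml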
	
	I0725 13:26:53.333903   60896 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220725132539-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:26:53.333962   60896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:26:53.341316   60896 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:26:53.341375   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:26:53.348729   60896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0725 13:26:53.360708   60896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:26:53.372857   60896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0725 13:26:53.385380   60896 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:26:53.388890   60896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:26:53.398360   60896 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543 for IP: 192.168.76.2
	I0725 13:26:53.398470   60896 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:26:53.398520   60896 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:26:53.398593   60896 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/client.key
	I0725 13:26:53.398650   60896 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.key.31bdca25
	I0725 13:26:53.398698   60896 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.key
	I0725 13:26:53.398918   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:26:53.398960   60896 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:26:53.398971   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:26:53.399004   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:26:53.399033   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:26:53.399058   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:26:53.399119   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:26:53.399636   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:26:53.416223   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:26:53.432572   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:26:53.449196   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:26:53.465993   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:26:53.482339   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:26:53.498714   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:26:53.515036   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:26:53.531395   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:26:53.547950   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:26:53.587127   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:26:53.603886   60896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:26:53.616328   60896 ssh_runner.go:195] Run: openssl version
	I0725 13:26:53.621375   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:26:53.628836   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.632532   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.632580   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.637683   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:26:53.644581   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:26:53.652216   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.655971   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.656010   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.661284   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:26:53.668359   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:26:53.676006   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.679917   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.680017   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.685793   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
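	Each of the three cert installs above follows the same OpenSSL convention: compute the certificate's subject hash, then symlink the PEM to <hash>.0 in /etc/ssl/certs, which is the name OpenSSL's lookup routines expect. As a sketch, using the last file from this run:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")    # b5213941 for this CA
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"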
	I0725 13:26:53.692646   60896 kubeadm.go:395] StartCluster: {Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:53.692743   60896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:26:53.721978   60896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:26:53.729370   60896 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:26:53.729381   60896 kubeadm.go:626] restartCluster start
	I0725 13:26:53.729418   60896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:26:53.736072   60896 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:53.736123   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:53.808101   60896 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220725132539-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:26:53.808262   60896 kubeconfig.go:127] "embed-certs-20220725132539-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:26:53.808621   60896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:26:53.809797   60896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:26:53.817648   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:53.817713   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:53.826733   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.027462   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.027716   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.038403   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.227398   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.227644   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.238236   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.427576   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.427755   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.438756   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.627358   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.627497   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.636422   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.827394   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.827487   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.838488   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.026933   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.027049   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.037485   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.226880   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.226957   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.235857   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.427840   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.428001   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.438429   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.628967   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.629079   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.639603   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.828963   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.829161   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.839558   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.027028   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.027119   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.037616   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.229013   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.229229   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.239416   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.427054   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.427246   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.437328   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.628631   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.628739   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.639033   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.826934   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.826996   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.835856   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.835867   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.835926   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.844515   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.844534   60896 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 13:26:56.844542   60896 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:26:56.844600   60896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:26:56.878986   60896 docker.go:443] Stopping containers: [c3d79be36829 1c51316a3481 68bdaf34cc2f 7c6f3ac7c5f3 79f7459bb476 b7549c872bf4 ff7140875b14 b8bc65908490 2261dd283394 99e1f7baa7d0 a2c3192c3c39 f358469cafac 4eba7ec75371 e84371b0922e 3ff0cb9c7d63 22853dac1834]
	I0725 13:26:56.879060   60896 ssh_runner.go:195] Run: docker stop c3d79be36829 1c51316a3481 68bdaf34cc2f 7c6f3ac7c5f3 79f7459bb476 b7549c872bf4 ff7140875b14 b8bc65908490 2261dd283394 99e1f7baa7d0 a2c3192c3c39 f358469cafac 4eba7ec75371 e84371b0922e 3ff0cb9c7d63 22853dac1834
	I0725 13:26:56.908333   60896 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:26:56.918203   60896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:26:56.925547   60896 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 25 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 25 20:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 20:25 /etc/kubernetes/scheduler.conf
	
	I0725 13:26:56.925599   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:26:56.932577   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:26:56.939336   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:26:56.946087   60896 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.946134   60896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:26:56.952735   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:26:56.959517   60896 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.959565   60896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:26:56.965970   60896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:56.972930   60896 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:56.972940   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.017926   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.767698   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.943236   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.991345   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:58.050211   60896 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:26:58.050286   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:58.582245   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:59.082479   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:59.094188   60896 api_server.go:71] duration metric: took 1.043948998s to wait for apiserver process to appear ...
	I0725 13:26:59.094209   60896 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:26:59.094231   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:26:59.095509   60896 api_server.go:256] stopped: https://127.0.0.1:59426/healthz: Get "https://127.0.0.1:59426/healthz": EOF
	I0725 13:26:59.596003   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:02.411623   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:27:02.411651   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:27:02.597905   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:02.607032   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:27:02.607059   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:27:03.095805   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:03.102036   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:27:03.102057   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:27:03.595754   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:03.601497   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 200:
	ok
	I0725 13:27:03.613887   60896 api_server.go:140] control plane version: v1.24.2
	I0725 13:27:03.613902   60896 api_server.go:130] duration metric: took 4.51955617s to wait for apiserver health ...
	I0725 13:27:03.613908   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:27:03.613912   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:27:03.613920   60896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:27:03.621732   60896 system_pods.go:59] 8 kube-system pods found
	I0725 13:27:03.621746   60896 system_pods.go:61] "coredns-6d4b75cb6d-htpr6" [ea0b0f7f-8b0a-4385-b505-e3122fe524b0] Running
	I0725 13:27:03.621754   60896 system_pods.go:61] "etcd-embed-certs-20220725132539-44543" [9d01d9cf-2802-46d5-8ca1-7a4e6c619232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 13:27:03.621759   60896 system_pods.go:61] "kube-apiserver-embed-certs-20220725132539-44543" [89aebf00-48c5-4d71-b8c4-ad3faade9c36] Running
	I0725 13:27:03.621763   60896 system_pods.go:61] "kube-controller-manager-embed-certs-20220725132539-44543" [b27f6cdf-dc9b-4c22-820a-434d64ff35d1] Running
	I0725 13:27:03.621767   60896 system_pods.go:61] "kube-proxy-7pjkq" [7e1ad46c-cdbd-4109-956b-3250bf6a1a8e] Running
	I0725 13:27:03.621772   60896 system_pods.go:61] "kube-scheduler-embed-certs-20220725132539-44543" [946e68be-c055-4c90-bd5d-31c53b3534a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:27:03.621779   60896 system_pods.go:61] "metrics-server-5c6f97fb75-4xt92" [705f970d-49d5-4a4c-9e18-6da6f236cff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:27:03.621783   60896 system_pods.go:61] "storage-provisioner" [4b92166d-6e5a-4692-b6e4-4269d858e8c3] Running
	I0725 13:27:03.621787   60896 system_pods.go:74] duration metric: took 7.862224ms to wait for pod list to return data ...
	I0725 13:27:03.621793   60896 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:27:03.624429   60896 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:27:03.624445   60896 node_conditions.go:123] node cpu capacity is 6
	I0725 13:27:03.624453   60896 node_conditions.go:105] duration metric: took 2.656612ms to run NodePressure ...
	I0725 13:27:03.624470   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:27:03.765029   60896 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 13:27:03.769671   60896 kubeadm.go:777] kubelet initialised
	I0725 13:27:03.769686   60896 kubeadm.go:778] duration metric: took 4.63572ms waiting for restarted kubelet to initialise ...
	I0725 13:27:03.769694   60896 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:27:03.774718   60896 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:03.779434   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:03.779442   60896 pod_ready.go:81] duration metric: took 4.711352ms waiting for pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:03.779448   60896 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:05.795546   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:07.796661   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:09.797132   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:11.797247   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:13.797607   60896 pod_ready.go:92] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:13.797620   60896 pod_ready.go:81] duration metric: took 10.017876261s waiting for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.797626   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.801581   60896 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:13.801589   60896 pod_ready.go:81] duration metric: took 3.958491ms waiting for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.801594   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:15.814130   60896 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:18.313101   60896 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:18.812898   60896 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.812911   60896 pod_ready.go:81] duration metric: took 5.011165723s waiting for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.812917   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7pjkq" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.816934   60896 pod_ready.go:92] pod "kube-proxy-7pjkq" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.816941   60896 pod_ready.go:81] duration metric: took 4.020031ms waiting for pod "kube-proxy-7pjkq" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.816946   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.820860   60896 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.820867   60896 pod_ready.go:81] duration metric: took 3.91141ms waiting for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.820873   60896 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:20.830973   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:22.831222   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:25.330338   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:27.331801   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:29.333198   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:31.833556   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:34.331073   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:36.332608   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:38.333354   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:40.832521   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:42.833818   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:45.331089   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:47.334374   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:49.832653   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:51.834727   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:54.334928   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:56.832318   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:00.078228   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:28:00.141595   60183 out.go:204]   - Booting up control plane ...
	W0725 13:28:00.145489   60183 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
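For readers reproducing this failure: the kubelet-check above is just an HTTP probe of the kubelet's healthz endpoint, and the repeated "connection refused" means nothing is listening on port 10248 at all. Below is a minimal Go sketch of the same probe; the URL is quoted from the log, while the retry count and timeout are illustrative, not kubeadm's.

// kubelet_healthz.go — a sketch of the probe kubeadm's kubelet-check performs above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// The "connection refused" case from the log: the kubelet
			// process is not listening on its healthz port at all.
			fmt.Println("kubelet not healthy:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
		return
	}
}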
	
	I0725 13:28:00.145526   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:28:00.568444   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:28:00.578598   60183 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:28:00.578655   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:28:00.586062   60183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
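The "config check failed, skipping stale config cleanup" path above is driven by a plain ls over the four kubeconfig files: if any is missing, ls exits with status 2 and minikube skips the cleanup and goes straight to a fresh kubeadm init. A sketch of that check, with the paths copied from the log (the sudo/exec plumbing is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	// Exit status 2 (any file absent) means there is no stale config to clean.
	if err := exec.Command("sudo", append([]string{"ls", "-la"}, files...)...).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("stale configs present; cleanup would run")
}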
	I0725 13:28:00.586084   60183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:28:01.303869   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:27:58.834263   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:01.331255   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:03.332422   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:02.819781   60183 out.go:204]   - Booting up control plane ...
	I0725 13:28:05.334830   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:07.335519   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:09.834439   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:11.835591   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:14.334337   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:16.335031   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:18.835705   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:21.333617   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:23.334016   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:25.833249   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:27.835336   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:30.334081   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:32.335851   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:34.833220   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:36.833578   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:39.336254   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:41.835840   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:44.334110   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:46.833528   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:48.836513   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:50.836788   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:53.336552   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:55.835048   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:57.835550   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:00.336892   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:02.836866   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:05.336690   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:07.835667   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:09.837186   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:12.335226   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:14.335725   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:16.336690   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:18.836768   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:21.337269   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:23.837151   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:26.335027   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:28.338729   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:30.837640   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:33.337517   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:35.835459   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:37.836627   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:40.335900   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:42.337529   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:44.337705   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:46.836842   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:49.337159   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:51.838140   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:54.338382   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:56.837922   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:57.738590   60183 kubeadm.go:397] StartCluster complete in 7m59.250327249s
	I0725 13:29:57.738667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:29:57.767241   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.767253   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:29:57.767311   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:29:57.795435   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.795448   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:29:57.795503   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:29:57.824559   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.824581   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:29:57.824642   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:29:57.854900   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.854912   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:29:57.854967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:29:57.883684   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.883695   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:29:57.883747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:29:57.917022   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.917034   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:29:57.917091   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:29:57.948784   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.948800   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:29:57.948858   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:29:57.982242   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.982254   60183 logs.go:276] No container was found matching "kube-controller-manager"
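The sweep above runs one docker ps -a --filter per control-plane component, using the k8s_ name prefix that kubelet-managed containers carry; zero matches for every component confirms the control plane never came up. A sketch of the same sweep (component list abridged, error handling illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		// Mirrors the filter visible in the log lines above.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}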
	I0725 13:29:57.982261   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:29:57.982268   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:30:00.039435   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0570932s)
	I0725 13:30:00.039559   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:30:00.039566   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:30:00.078912   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:30:00.078928   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:30:00.090262   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:30:00.090278   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:30:00.144105   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
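The refusal on localhost:8443 above is consistent with the empty container sweep: no kube-apiserver container exists, so nothing listens on the apiserver port. A one-shot TCP dial makes the same point; host and port are quoted from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// The same "connection refused" kubectl reported above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}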
	I0725 13:30:00.144118   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:30:00.144124   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0725 13:30:00.158368   60183 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 13:30:00.158385   60183 out.go:239] * 
	W0725 13:30:00.158482   60183 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:30:00.158497   60183 out.go:239] * 
	W0725 13:30:00.159027   60183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 13:30:00.221624   60183 out.go:177] 
	W0725 13:30:00.264001   60183 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 13:30:00.264127   60183 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 13:30:00.264205   60183 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
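The suggestion above amounts to retrying the start with the kubelet's cgroup driver pinned to systemd. A sketch of that invocation via os/exec; the --extra-config value is quoted from the log, while the profile name is hypothetical:

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start",
		"-p", "old-k8s-version-test", // hypothetical profile name
		"--extra-config=kubelet.cgroup-driver=systemd")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}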
	I0725 13:30:00.327781   60183 out.go:177] 
	I0725 13:29:59.337892   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:01.834615   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:03.837458   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:05.838380   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:08.337530   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:10.837884   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:12.838423   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:14.839069   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:17.338474   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:19.839227   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:22.338405   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:24.339171   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:26.839399   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:29.337976   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:31.839581   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:34.339838   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:36.839475   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:39.337079   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:41.337712   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:43.339506   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:45.837751   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:47.838863   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:49.840092   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:52.338270   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:54.340101   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:56.836913   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:59.339951   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:01.839593   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:04.338000   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:06.340092   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:08.840337   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:11.338636   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:13.339284   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:15.340719   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:17.341394   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:18.834086   60896 pod_ready.go:81] duration metric: took 4m0.006215554s waiting for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" ...
	E0725 13:31:18.834110   60896 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:31:18.834130   60896 pod_ready.go:38] duration metric: took 4m15.057006558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:31:18.834170   60896 kubeadm.go:630] restartCluster took 4m25.097071028s
	W0725 13:31:18.834308   60896 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 13:31:18.834336   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:31:21.172068   60896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.337649089s)
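Meanwhile, the pod_ready poll in the other process (pid 60896) has been checking the PodReady condition on the metrics-server pod every couple of seconds for the full 4m0s budget. A client-go sketch of the equivalent lookup — this is not minikube's pod_ready.go; the namespace and pod name are quoted from the log, the kubeconfig path is the client-go default:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the integration test uses its own kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-5c6f97fb75-4xt92", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			// This is the "Ready":"False" the log keeps printing.
			fmt.Printf("pod %q Ready: %s\n", pod.Name, c.Status)
		}
	}
}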
	I0725 13:31:21.172128   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:31:21.181652   60896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:31:21.189594   60896 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:31:21.189647   60896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:31:21.197391   60896 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:31:21.197416   60896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:31:21.478874   60896 out.go:204]   - Generating certificates and keys ...
	I0725 13:31:22.599968   60896 out.go:204]   - Booting up control plane ...
	I0725 13:31:30.133934   60896 out.go:204]   - Configuring RBAC rules ...
	I0725 13:31:30.524739   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:31:30.524751   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:31:30.524767   60896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:31:30.524868   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:30.524894   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=embed-certs-20220725132539-44543 minikube.k8s.io/updated_at=2022_07_25T13_31_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:30.637807   60896 ops.go:34] apiserver oom_adj: -16
	I0725 13:31:30.637819   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:31.207665   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:31.708629   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:32.207669   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:32.707725   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:33.206619   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:33.708741   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:34.207204   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:34.707441   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:35.208733   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:35.707874   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:36.207589   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:36.707809   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:37.207740   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:37.707370   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:38.207308   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:38.706826   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:39.206881   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:39.708858   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:40.208450   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:40.706969   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:41.206867   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:41.706953   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:42.206938   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:42.706934   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:43.206961   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:43.260786   60896 kubeadm.go:1045] duration metric: took 12.735613106s to wait for elevateKubeSystemPrivileges.
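The half-second poll above is minikube waiting for the "default" service account to exist before granting cluster-admin to kube-system (elevateKubeSystemPrivileges). A sketch of the same wait loop; the binary and kubeconfig paths are copied from the log, the timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.2/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}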
	I0725 13:31:43.260807   60896 kubeadm.go:397] StartCluster complete in 4m49.559741772s
	I0725 13:31:43.260831   60896 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:31:43.260918   60896 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:31:43.261674   60896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:31:43.789723   60896 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220725132539-44543" rescaled to 1
	I0725 13:31:43.789759   60896 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:31:43.789779   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:31:43.789789   60896 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:31:43.789846   60896 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220725132539-44543"
	I0725 13:31:43.789851   60896 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220725132539-44543"
	I0725 13:31:43.832406   60896 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220725132539-44543"
	I0725 13:31:43.789856   60896 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220725132539-44543"
	W0725 13:31:43.832420   60896 addons.go:162] addon metrics-server should already be in state true
	I0725 13:31:43.832423   60896 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220725132539-44543"
	W0725 13:31:43.832430   60896 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:31:43.832435   60896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220725132539-44543"
	I0725 13:31:43.789884   60896 addons.go:65] Setting dashboard=true in profile "embed-certs-20220725132539-44543"
	I0725 13:31:43.832465   60896 addons.go:153] Setting addon dashboard=true in "embed-certs-20220725132539-44543"
	I0725 13:31:43.789964   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	W0725 13:31:43.832478   60896 addons.go:162] addon dashboard should already be in state true
	I0725 13:31:43.832481   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:43.832350   60896 out.go:177] * Verifying Kubernetes components...
	I0725 13:31:43.832523   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:43.832514   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:43.832730   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.832863   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.853502   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:31:43.857670   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.857757   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:31:43.857763   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.886652   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:43.981533   60896 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220725132539-44543"
	I0725 13:31:43.995633   60896 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:31:44.032544   60896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0725 13:31:44.032555   60896 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:31:44.032587   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:44.128468   60896 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:31:44.069777   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:31:44.088256   60896 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220725132539-44543" to be "Ready" ...
	I0725 13:31:44.107700   60896 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:31:44.129018   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:44.149686   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:31:44.149713   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:31:44.171575   60896 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:31:44.149906   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.149958   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.192570   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:31:44.192596   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:31:44.192681   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.196642   60896 node_ready.go:49] node "embed-certs-20220725132539-44543" has status "Ready":"True"
	I0725 13:31:44.196673   60896 node_ready.go:38] duration metric: took 47.039384ms waiting for node "embed-certs-20220725132539-44543" to be "Ready" ...
	I0725 13:31:44.196683   60896 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:31:44.205428   60896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:44.251652   60896 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:31:44.251671   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:31:44.251763   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.278968   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.282079   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.297541   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.341814   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.480595   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:31:44.480614   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:31:44.484668   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:31:44.492992   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:31:44.493008   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:31:44.502662   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:31:44.505187   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:31:44.505200   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:31:44.593413   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:31:44.593434   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:31:44.680648   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:31:44.680675   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:31:44.692890   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:31:44.692904   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:31:44.789191   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:31:44.804845   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:31:44.804859   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:31:44.885209   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:31:44.885223   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:31:44.904020   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:31:44.904035   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:31:44.998800   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:31:44.998819   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:31:45.094281   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:31:45.094297   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:31:45.114673   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:31:45.114691   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:31:45.202664   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:31:45.320349   60896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.462520601s)
	I0725 13:31:45.320371   60896 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
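	The one-second ssh_runner command above rewrites the CoreDNS ConfigMap in place: it fetches the coredns ConfigMap, uses sed to insert a hosts block immediately before the "forward . /etc/resolv.conf" line, and replaces the ConfigMap. The injected Corefile fragment, spelled out verbatim in the sed expression, is:

	    hosts {
	       192.168.65.2 host.minikube.internal
	       fallthrough
	    }

	This is what makes host.minikube.internal resolvable from inside pods; CoreDNS picks the change up on its next reload (visible later in the coredns log).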
	I0725 13:31:45.486719   60896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001991871s)
	I0725 13:31:45.628322   60896 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220725132539-44543"
	I0725 13:31:45.926821   60896 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:31:45.962589   60896 addons.go:414] enableAddons completed in 2.17271439s
	I0725 13:31:46.222098   60896 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:48.726349   60896 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:49.231974   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.231992   60896 pod_ready.go:81] duration metric: took 5.026392872s waiting for pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.232000   60896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-m6mqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.241054   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-m6mqs" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.241066   60896 pod_ready.go:81] duration metric: took 9.060472ms waiting for pod "coredns-6d4b75cb6d-m6mqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.241072   60896 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.278239   60896 pod_ready.go:92] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.278258   60896 pod_ready.go:81] duration metric: took 37.176858ms waiting for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.278274   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.288381   60896 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.288396   60896 pod_ready.go:81] duration metric: took 10.109871ms waiting for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.288407   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.296826   60896 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.296836   60896 pod_ready.go:81] duration metric: took 8.420236ms waiting for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.296843   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvlv7" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.620355   60896 pod_ready.go:92] pod "kube-proxy-qvlv7" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.620366   60896 pod_ready.go:81] duration metric: took 323.508913ms waiting for pod "kube-proxy-qvlv7" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.620373   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:50.021783   60896 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:50.021794   60896 pod_ready.go:81] duration metric: took 401.404849ms waiting for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:50.021801   60896 pod_ready.go:38] duration metric: took 5.824934979s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
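	The pod_ready polling above (pod_ready.go:78/92/102 cycling between "Ready":"False" and "Ready":"True") is roughly what kubectl wait does per label selector. A hedged sketch of the equivalent check for one of the listed components, assuming the kubeconfig already points at this cluster:

	    # Mirrors the 6m0s per-pod timeout used by the test harness
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m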
	I0725 13:31:50.021817   60896 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:31:50.021876   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:31:50.033076   60896 api_server.go:71] duration metric: took 6.243114213s to wait for apiserver process to appear ...
	I0725 13:31:50.033091   60896 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:31:50.033098   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:31:50.038511   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 200:
	ok
	I0725 13:31:50.039598   60896 api_server.go:140] control plane version: v1.24.2
	I0725 13:31:50.039608   60896 api_server.go:130] duration metric: took 6.512407ms to wait for apiserver health ...
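	The healthz probe (api_server.go:240) is a plain HTTPS GET against the forwarded apiserver port, 59426 for this run. It can be reproduced with curl; -k is needed because the apiserver's certificate is not in the host trust store:

	    curl -k https://127.0.0.1:59426/healthz
	    # expected body: ok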
	I0725 13:31:50.039612   60896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:31:50.223624   60896 system_pods.go:59] 9 kube-system pods found
	I0725 13:31:50.223637   60896 system_pods.go:61] "coredns-6d4b75cb6d-4zn7p" [55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7] Running
	I0725 13:31:50.223641   60896 system_pods.go:61] "coredns-6d4b75cb6d-m6mqs" [6287a319-fd89-45e2-a0aa-615a41b7ba03] Running
	I0725 13:31:50.223644   60896 system_pods.go:61] "etcd-embed-certs-20220725132539-44543" [8379e39b-c68c-4c21-ac53-1d3ad380104c] Running
	I0725 13:31:50.223648   60896 system_pods.go:61] "kube-apiserver-embed-certs-20220725132539-44543" [65bb168c-bc95-456d-8ab4-df3d903c5c84] Running
	I0725 13:31:50.223651   60896 system_pods.go:61] "kube-controller-manager-embed-certs-20220725132539-44543" [1e433951-6dfb-4f38-9cc7-008f5e50554f] Running
	I0725 13:31:50.223654   60896 system_pods.go:61] "kube-proxy-qvlv7" [92133973-6c32-49a0-910f-93bfa25bcdd1] Running
	I0725 13:31:50.223659   60896 system_pods.go:61] "kube-scheduler-embed-certs-20220725132539-44543" [cd2956ef-a5fc-49ce-ba4f-f53046e750f2] Running
	I0725 13:31:50.223664   60896 system_pods.go:61] "metrics-server-5c6f97fb75-5w696" [5c519070-80ae-4ef5-b8f4-5927ba8fc676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:31:50.223669   60896 system_pods.go:61] "storage-provisioner" [d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca] Running
	I0725 13:31:50.223673   60896 system_pods.go:74] duration metric: took 184.051632ms to wait for pod list to return data ...
	I0725 13:31:50.223678   60896 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:31:50.421586   60896 default_sa.go:45] found service account: "default"
	I0725 13:31:50.421598   60896 default_sa.go:55] duration metric: took 197.910754ms for default service account to be created ...
	I0725 13:31:50.421604   60896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:31:50.623670   60896 system_pods.go:86] 9 kube-system pods found
	I0725 13:31:50.623684   60896 system_pods.go:89] "coredns-6d4b75cb6d-4zn7p" [55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7] Running
	I0725 13:31:50.623691   60896 system_pods.go:89] "coredns-6d4b75cb6d-m6mqs" [6287a319-fd89-45e2-a0aa-615a41b7ba03] Running
	I0725 13:31:50.623695   60896 system_pods.go:89] "etcd-embed-certs-20220725132539-44543" [8379e39b-c68c-4c21-ac53-1d3ad380104c] Running
	I0725 13:31:50.623698   60896 system_pods.go:89] "kube-apiserver-embed-certs-20220725132539-44543" [65bb168c-bc95-456d-8ab4-df3d903c5c84] Running
	I0725 13:31:50.623702   60896 system_pods.go:89] "kube-controller-manager-embed-certs-20220725132539-44543" [1e433951-6dfb-4f38-9cc7-008f5e50554f] Running
	I0725 13:31:50.623706   60896 system_pods.go:89] "kube-proxy-qvlv7" [92133973-6c32-49a0-910f-93bfa25bcdd1] Running
	I0725 13:31:50.623709   60896 system_pods.go:89] "kube-scheduler-embed-certs-20220725132539-44543" [cd2956ef-a5fc-49ce-ba4f-f53046e750f2] Running
	I0725 13:31:50.623714   60896 system_pods.go:89] "metrics-server-5c6f97fb75-5w696" [5c519070-80ae-4ef5-b8f4-5927ba8fc676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:31:50.623718   60896 system_pods.go:89] "storage-provisioner" [d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca] Running
	I0725 13:31:50.623722   60896 system_pods.go:126] duration metric: took 202.109351ms to wait for k8s-apps to be running ...
	I0725 13:31:50.623730   60896 system_svc.go:44] waiting for kubelet service to be running ...
	I0725 13:31:50.623784   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:31:50.635149   60896 system_svc.go:56] duration metric: took 11.408107ms WaitForService to wait for kubelet.
	I0725 13:31:50.635164   60896 kubeadm.go:572] duration metric: took 6.845188044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:31:50.635187   60896 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:31:50.820572   60896 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:31:50.820586   60896 node_conditions.go:123] node cpu capacity is 6
	I0725 13:31:50.820593   60896 node_conditions.go:105] duration metric: took 185.39801ms to run NodePressure ...
	I0725 13:31:50.820602   60896 start.go:216] waiting for startup goroutines ...
	I0725 13:31:50.852176   60896 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:31:50.873434   60896 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220725132539-44543" cluster and "default" namespace by default
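	As a quick smoke test of the final state, the new context (which minikube names after the profile) can be queried directly; a sketch:

	    kubectl --context embed-certs-20220725132539-44543 get pods -A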
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:26:49 UTC, end at Mon 2022-07-25 20:32:47 UTC. --
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.081603761Z" level=info msg="ignoring event" container=85d790c640c8b47f6f4c8a3cb9344863a3914dab753e040708dd10ff92094c55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.149662928Z" level=info msg="ignoring event" container=b20ca50ff161291b66df64a5a880cda73de4f70b2ffa265cecb3ca9194fbb431 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.215549330Z" level=info msg="ignoring event" container=198e8fd6b46af5e6c62d661da64f3105ebc27c601ce3f19c68a4bc9d911fa0df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.288791631Z" level=info msg="ignoring event" container=5c7c81ec6d0d57b43c9cd9a344976fdb7757ec75a8f563a44df3bfb48ecfb827 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.370771919Z" level=info msg="ignoring event" container=71966907b491834781c58b89b9fd8711b6f7a3db447a18c88b2966cf3bcbd9e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.440057133Z" level=info msg="ignoring event" container=0aec9c7d4248eec0e9c77d3c59225d5cb857181c4957ac863c59dbbb21457f69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.553895228Z" level=info msg="ignoring event" container=c5c9dd312fa561edca255ed23a4b89e20db3e624ca52855b2c361747bdceca46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.618129752Z" level=info msg="ignoring event" container=375215e177b2c12fb6a0e338fae989f55e17cfd0a56faf75e822bc4480f743d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.697561986Z" level=info msg="ignoring event" container=253f0f1685d75fb088997970c54a3c821aabde506a3395c8bd0a42f820354d13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.759162740Z" level=info msg="ignoring event" container=fb77eda7acd0f461d040c83dad2fa08dd76cada6c81a2910aac86fcd80b4cf06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.848188863Z" level=info msg="ignoring event" container=119bc29f357405ad917f56b17b03815d613e36de424206b29b1623c884fd20bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:46 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:46.761240108Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:31:46 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:46.761288557Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:31:46 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:46.762299809Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:31:47 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:47.858622031Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:31:48 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:48.157834691Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.050364498Z" level=info msg="ignoring event" container=dcefbd6025133341169fffe76554d1c404eb2da7044d81ce76efb865b7173688 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.253539985Z" level=info msg="ignoring event" container=be78190d2ffa8146fb25278f510eb579b84f69e76e4f04f6cab429921baf5c3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.635282144Z" level=info msg="ignoring event" container=da4096f15772b88569be3f2f14b83de85b56da7ce549a12c481310415644cafb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.801914213Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 20:31:52 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:52.479879998Z" level=info msg="ignoring event" container=8c59a7a81bd93e9536ad9c24f177947698b86bbb5dbe857ad4c0883824eab73c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:32:01 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:01.558119564Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:01 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:01.558163132Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:01 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:01.559195801Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:10 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:10.679853127Z" level=info msg="ignoring event" container=74f9b364f42ab5fb7e8406c542d7d5ab232fdf3b1d2a38c262579ac2087de8a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
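	The recurring fake.domain failures above are expected here: the test evidently points metrics-server at an unresolvable registry host, so every pull attempt dies at the DNS lookup against 192.168.65.2:53 (Docker Desktop's embedded resolver), which is why the metrics-server pod stays Pending in the pod listings earlier. The same failure mode reproduces trivially; a sketch (the image path is hypothetical — any reference on this host fails the same way):

	    # Fails at DNS resolution with the same "no such host" error as the daemon log
	    docker pull fake.domain/example:latest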
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	74f9b364f42ab       a90209bb39e3d                                                                                    37 seconds ago       Exited              dashboard-metrics-scraper   2                   3b509a454f260
	5d443504c77d9       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   50 seconds ago       Running             kubernetes-dashboard        0                   de5934f2a6b43
	57ad4b4dda5bb       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   58a01c9a94306
	6f8b5d6ba405f       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   466dfdc8aa749
	79d93d68c7bc3       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   f217baf9b0cd2
	09b5fd9f7b622       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   d4d8e213f4f74
	12e962a6ce008       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   2135c184c938a
	40e9cee209077       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   54a4c998cf04f
	807575ee1a9a5       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   5efc03e311742
	
	* 
	* ==> coredns [6f8b5d6ba405] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220725132539-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220725132539-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=embed-certs-20220725132539-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_31_30_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:31:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220725132539-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:32:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220725132539-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                215a12c7-a2b8-46f8-b13f-1d5e49241600
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4zn7p                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-embed-certs-20220725132539-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-embed-certs-20220725132539-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-embed-certs-20220725132539-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-qvlv7                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-embed-certs-20220725132539-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-5c6f97fb75-5w696                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-hpdct                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-mp7g8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x3 over 84s)  kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x3 over 84s)  kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x3 over 84s)  kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                67s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeReady
	  Normal  RegisteredNode           65s                node-controller  Node embed-certs-20220725132539-44543 event: Registered Node embed-certs-20220725132539-44543 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
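	The percentages in the Allocated resources table are computed against node allocatable: 6 CPUs (6000m) and 6086504Ki of memory. The arithmetic, as a sketch:

	    echo $(( 850 * 100 / 6000 ))             # 14 -> 850m CPU requested of 6000m ≈ 14%
	    echo $(( 370 * 1024 * 100 / 6086504 ))   # 6  -> 370Mi requested of 6086504Ki ≈ 6%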
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [807575ee1a9a] <==
	* {"level":"info","ts":"2022-07-25T20:31:24.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:31:25.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220725132539-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:31:25.256Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.256Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.256Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.257Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:31:25.257Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:31:25.261Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:31:25.261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:32:47 up  1:14,  0 users,  load average: 0.42, 0.79, 1.10
	Linux embed-certs-20220725132539-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [12e962a6ce00] <==
	* I0725 20:31:29.415603       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:31:30.372554       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:31:30.377894       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 20:31:30.387359       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:31:30.409950       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:31:42.822982       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:31:43.020795       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 20:31:44.003401       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:31:45.633303       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.97.51.201]
	I0725 20:31:45.898413       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.183.11]
	I0725 20:31:45.910007       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.221.100]
	W0725 20:31:46.539642       1 handler_proxy.go:102] no RequestInfo found in the context
	W0725 20:31:46.539739       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:31:46.539771       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	E0725 20:31:46.539799       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:31:46.539814       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 20:31:46.541260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:32:46.499328       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:32:46.499398       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:32:46.499408       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:32:46.500569       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:32:46.500678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:32:46.500716       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
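	The repeating OpenAPI 503s for v1beta1.metrics.k8s.io follow directly from the earlier pod listings: the aggregated API is registered, but its backing metrics-server pod never becomes Ready, so the aggregator keeps requeueing. The APIService condition can be inspected directly; a sketch:

	    kubectl get apiservice v1beta1.metrics.k8s.io
	    # AVAILABLE stays False until metrics-server endpoints exist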
	
	* 
	* ==> kube-controller-manager [40e9cee20907] <==
	* I0725 20:31:43.024743       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qvlv7"
	I0725 20:31:43.274696       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-m6mqs"
	I0725 20:31:43.279391       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4zn7p"
	I0725 20:31:43.302571       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	E0725 20:31:43.305851       1 replica_set.go:550] sync "kube-system/coredns-6d4b75cb6d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-6d4b75cb6d": the object has been modified; please apply your changes to the latest version and try again
	I0725 20:31:43.308950       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-m6mqs"
	I0725 20:31:45.523782       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:31:45.528140       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:31:45.598549       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:31:45.603442       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-5w696"
	I0725 20:31:45.717349       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:31:45.725413       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0725 20:31:45.727454       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:31:45.732590       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:31:45.734887       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:31:45.736020       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:31:45.736080       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:31:45.739794       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 20:31:45.742577       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:31:45.742739       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:31:45.798566       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-hpdct"
	I0725 20:31:45.802727       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-mp7g8"
	W0725 20:31:52.364949       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0725 20:32:44.392876       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 20:32:44.455047       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [79d93d68c7bc] <==
	* I0725 20:31:43.909226       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:31:43.909368       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:31:43.909403       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:31:44.000185       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:31:44.000284       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:31:44.000293       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:31:44.000303       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0725 20:31:44.000351       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:31:44.000529       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:31:44.000777       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:31:44.000784       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:31:44.001122       1 config.go:317] "Starting service config controller"
	I0725 20:31:44.001135       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:31:44.001231       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:31:44.001237       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:31:44.001411       1 config.go:444] "Starting node config controller"
	I0725 20:31:44.001423       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:31:44.101851       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:31:44.101892       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:31:44.101899       1 shared_informer.go:262] Caches are synced for service config
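	kube-proxy picked the iptables proxier after finding no explicit mode configured. Assuming its metrics endpoint is on the default port (10249), the effective mode can be read back from inside the node; a sketch:

	    curl -s http://127.0.0.1:10249/proxyMode
	    # expected: iptables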
	
	* 
	* ==> kube-scheduler [09b5fd9f7b62] <==
	* W0725 20:31:27.330715       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:31:27.330748       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:31:28.165785       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:31:28.165840       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:31:28.165785       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 20:31:28.165886       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 20:31:28.203972       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:31:28.204009       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:31:28.245452       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:31:28.245501       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:31:28.247590       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:31:28.247634       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:31:28.321716       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:31:28.321767       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:31:28.357626       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 20:31:28.357662       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 20:31:28.399651       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 20:31:28.399691       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 20:31:28.442074       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:31:28.442089       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:31:28.454053       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:31:28.454089       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 20:31:28.462184       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:31:28.462258       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0725 20:31:30.726755       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
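
The storm of "forbidden" warnings above is the kube-scheduler's informers starting before the apiserver has finished bootstrapping the system:kube-scheduler RBAC bindings; once the bindings land, the caches sync (the final line) and the errors stop. A minimal sketch of probing one of the denied permissions from outside the cluster with a client-go SelfSubjectAccessReview; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: the default kubeconfig points at the cluster under test.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver whether the current identity may perform the exact
	// verb/resource combination the scheduler's informer was denied above.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csidrivers",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}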
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:26:49 UTC, end at Mon 2022-07-25 20:32:48 UTC. --
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.879321    9870 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.879399    9870 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.879428    9870 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.879452    9870 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.879479    9870 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897132    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5qmj\" (UniqueName: \"kubernetes.io/projected/772f2347-bdb3-4e73-ad0c-92bdc89a2ef2-kube-api-access-v5qmj\") pod \"kubernetes-dashboard-5fd5574d9f-mp7g8\" (UID: \"772f2347-bdb3-4e73-ad0c-92bdc89a2ef2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mp7g8"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897162    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/074796ba-5e47-424d-abea-7c53d2a8083d-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-hpdct\" (UID: \"074796ba-5e47-424d-abea-7c53d2a8083d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-hpdct"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897181    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92133973-6c32-49a0-910f-93bfa25bcdd1-xtables-lock\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897219    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvxl9\" (UniqueName: \"kubernetes.io/projected/55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7-kube-api-access-gvxl9\") pod \"coredns-6d4b75cb6d-4zn7p\" (UID: \"55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7\") " pod="kube-system/coredns-6d4b75cb6d-4zn7p"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897285    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qj7s\" (UniqueName: \"kubernetes.io/projected/5c519070-80ae-4ef5-b8f4-5927ba8fc676-kube-api-access-4qj7s\") pod \"metrics-server-5c6f97fb75-5w696\" (UID: \"5c519070-80ae-4ef5-b8f4-5927ba8fc676\") " pod="kube-system/metrics-server-5c6f97fb75-5w696"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897338    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92133973-6c32-49a0-910f-93bfa25bcdd1-lib-modules\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897385    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspd6\" (UniqueName: \"kubernetes.io/projected/074796ba-5e47-424d-abea-7c53d2a8083d-kube-api-access-kspd6\") pod \"dashboard-metrics-scraper-dffd48c4c-hpdct\" (UID: \"074796ba-5e47-424d-abea-7c53d2a8083d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-hpdct"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897403    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/772f2347-bdb3-4e73-ad0c-92bdc89a2ef2-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-mp7g8\" (UID: \"772f2347-bdb3-4e73-ad0c-92bdc89a2ef2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mp7g8"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897416    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca-tmp\") pod \"storage-provisioner\" (UID: \"d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca\") " pod="kube-system/storage-provisioner"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897431    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c519070-80ae-4ef5-b8f4-5927ba8fc676-tmp-dir\") pod \"metrics-server-5c6f97fb75-5w696\" (UID: \"5c519070-80ae-4ef5-b8f4-5927ba8fc676\") " pod="kube-system/metrics-server-5c6f97fb75-5w696"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897446    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9f4\" (UniqueName: \"kubernetes.io/projected/92133973-6c32-49a0-910f-93bfa25bcdd1-kube-api-access-8v9f4\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897474    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7-config-volume\") pod \"coredns-6d4b75cb6d-4zn7p\" (UID: \"55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7\") " pod="kube-system/coredns-6d4b75cb6d-4zn7p"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897506    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz58m\" (UniqueName: \"kubernetes.io/projected/d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca-kube-api-access-rz58m\") pod \"storage-provisioner\" (UID: \"d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca\") " pod="kube-system/storage-provisioner"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897534    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92133973-6c32-49a0-910f-93bfa25bcdd1-kube-proxy\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897550    9870 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:47.075832    9870 request.go:601] Waited for 1.176656075s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.082543    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220725132539-44543\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220725132539-44543"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.280079    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220725132539-44543\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220725132539-44543"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.486257    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220725132539-44543\" already exists" pod="kube-system/etcd-embed-certs-20220725132539-44543"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.734289    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220725132539-44543\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220725132539-44543"
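
Two details worth noting in the kubelet excerpt: the "Failed creating a mirror pod ... already exists" errors are the restarted kubelet re-registering static pods whose mirror pods survived in the apiserver, and the "Waited ... due to client-side throttling" line comes from client-go's default client-side rate limiter (QPS 5, burst 10), which backs up when a burst of requests follows a restart. A minimal sketch of raising those limits on a rest.Config before building a clientset; the kubeconfig path and the chosen limits are assumptions:

package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	// client-go throttles requests on the client side; the zero-value
	// defaults (QPS 5, Burst 10) are what produce the "Waited ... due to
	// client-side throttling" message when many requests are queued at once.
	cfg.QPS = 50
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", client != nil)
}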
	
	* 
	* ==> kubernetes-dashboard [5d443504c77d] <==
	* 2022/07/25 20:31:57 Using namespace: kubernetes-dashboard
	2022/07/25 20:31:57 Using in-cluster config to connect to apiserver
	2022/07/25 20:31:57 Using secret token for csrf signing
	2022/07/25 20:31:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 20:31:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 20:31:57 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 20:31:57 Generating JWE encryption key
	2022/07/25 20:31:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 20:31:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 20:31:57 Initializing JWE encryption key from synchronized object
	2022/07/25 20:31:57 Creating in-cluster Sidecar client
	2022/07/25 20:31:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:31:57 Serving insecurely on HTTP port: 9090
	2022/07/25 20:32:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:31:57 Starting overwatch
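
The repeated metric client health check failure suggests the dashboard cannot reach the dashboard-metrics-scraper Service yet; it retries every 30 seconds, and the 20:32:44 entry shows the scraper was still unavailable well after startup. A rough sketch of an equivalent check from outside the cluster, simply fetching the Service with client-go (the real dashboard goes through its in-cluster Sidecar client; the kubeconfig path here is an assumption):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// The health check above boils down to reaching this Service; an
	// error here means the scraper is not serving yet.
	svc, err := client.CoreV1().Services("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		fmt.Println("scraper not reachable yet:", err)
		return
	}
	fmt.Println("scraper service present, clusterIP:", svc.Spec.ClusterIP)
}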
	
	* 
	* ==> storage-provisioner [57ad4b4dda5b] <==
	* I0725 20:31:46.060841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:31:46.069555       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:31:46.069604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:31:46.074987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:31:46.075111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725132539-44543_c95feabb-f3f4-4425-9b90-9a6be90a8ff0!
	I0725 20:31:46.075096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e781833-7204-4708-960e-a4c71b96d938", APIVersion:"v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220725132539-44543_c95feabb-f3f4-4425-9b90-9a6be90a8ff0 became leader
	I0725 20:31:46.176368       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725132539-44543_c95feabb-f3f4-4425-9b90-9a6be90a8ff0!
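
The storage-provisioner lines show the standard client-go leader-election handshake: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the provisioner controller. A minimal sketch of the same pattern; note it uses a coordination/v1 Lease where the provisioner above locks an Endpoints object, and the identity string and timings are assumptions:

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock on the same name/namespace as the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				fmt.Println("acquired lease; the provisioner controller would start here")
			},
			OnStoppedLeading: func() {
				fmt.Println("lost lease; the controller would stop here")
			},
		},
	})
}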
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-5w696
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 describe pod metrics-server-5c6f97fb75-5w696
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220725132539-44543 describe pod metrics-server-5c6f97fb75-5w696: exit status 1 (298.783629ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-5w696" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220725132539-44543 describe pod metrics-server-5c6f97fb75-5w696: exit status 1
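
The non-running-pod sweep above is a field-selector query (status.phase!=Running) across all namespaces; the follow-up describe then fails with NotFound because the metrics-server pod was deleted between the list and the describe. The same query expressed via client-go, as a minimal sketch (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Same query the helper runs with kubectl: every pod, in every
	// namespace, whose status.phase is anything but Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}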
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220725132539-44543
helpers_test.go:235: (dbg) docker inspect embed-certs-20220725132539-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4",
	        "Created": "2022-07-25T20:25:47.150688175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272320,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:26:49.821137669Z",
	            "FinishedAt": "2022-07-25T20:26:47.876564535Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/hosts",
	        "LogPath": "/var/lib/docker/containers/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4/a26cbaf6dca3be2744afa29981920d6168c5fde76fe283eeaed07cfc93cee7e4-json.log",
	        "Name": "/embed-certs-20220725132539-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220725132539-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220725132539-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/feab8737e663edaef6645a883c879cca5d2ef1241abf71121e302e4ffafe275a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220725132539-44543",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220725132539-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220725132539-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220725132539-44543",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220725132539-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1761eaa7d1b2b77ab78790376d6fa7503f1514dcb85774044a1ed29a4cee40c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59423"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59426"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f1761eaa7d1b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220725132539-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a26cbaf6dca3",
	                        "embed-certs-20220725132539-44543"
	                    ],
	                    "NetworkID": "b9478eb32f8b0a21795cbbbab1e802bcb76d9edc2f3ea05b264734f7d0a9eaf5",
	                    "EndpointID": "79c6dcaa036d2e9475158bdb7afc0294181cc139928c22f65085c3db00a5932e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
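Everything in the docker inspect dump above — run state, restart policy, port bindings, the overlay2 graph-driver layers — is also available programmatically. A minimal sketch with the Docker Go SDK, printing the fields the post-mortem cares about most (container names are accepted in place of IDs, so the profile name suffices):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv honors DOCKER_HOST and friends; version negotiation avoids
	// API mismatches with the local daemon.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	info, err := cli.ContainerInspect(context.Background(), "embed-certs-20220725132539-44543")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status)
	// Host-port mappings, e.g. 8443/tcp -> 0.0.0.0:59426 in the dump above.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}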
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220725132539-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220725132539-44543 logs -n 25: (2.809338701s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |               Profile                |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220725125922-44543         | jenkins | v1.26.0 | 25 Jul 22 13:16 PDT | 25 Jul 22 13:16 PDT |
	|         | kubenet-20220725125922-44543                      |                                      |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                      |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220725125922-44543         | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:17 PDT |
	|         | kubenet-20220725125922-44543                      |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:17 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:19 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:19 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:20 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT | 25 Jul 22 13:21 PDT |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220725131610-44543 | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543      | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                      |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725132539-44543     | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
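
Each row in the audit table is a minikube invocation the test harness shelled out to. A minimal sketch of re-running one such command from Go with os/exec, mirroring how the harness drives the binary; the timeout value is an assumption:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical re-run of one audited command; binary path and profile
	// name are taken verbatim from the table above.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"pause", "-p", "embed-certs-20220725132539-44543", "--alsologtostderr", "-v=1")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("exit error:", err)
	}
}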
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:26:48
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:26:48.547427   60896 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:26:48.547663   60896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:26:48.547668   60896 out.go:309] Setting ErrFile to fd 2...
	I0725 13:26:48.547672   60896 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:26:48.547782   60896 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:26:48.548312   60896 out.go:303] Setting JSON to false
	I0725 13:26:48.563654   60896 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":15980,"bootTime":1658764828,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:26:48.563800   60896 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:26:48.585799   60896 out.go:177] * [embed-certs-20220725132539-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:26:48.627711   60896 notify.go:193] Checking for updates...
	I0725 13:26:48.648811   60896 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:26:48.669719   60896 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:26:48.690638   60896 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:26:48.712084   60896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:26:48.734191   60896 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:26:48.756550   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:26:48.757217   60896 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:26:48.825997   60896 docker.go:137] docker version: linux-20.10.17
	I0725 13:26:48.826132   60896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:26:48.960295   60896 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:26:48.899440621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:26:49.004155   60896 out.go:177] * Using the docker driver based on existing profile
	I0725 13:26:49.025981   60896 start.go:284] selected driver: docker
	I0725 13:26:49.026017   60896 start.go:808] validating driver "docker" against &{Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:49.026150   60896 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:26:49.029491   60896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:26:49.162003   60896 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:26:49.103146968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
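	For reference: the "docker system info --format "{{json .}}"" probe above returns the entire daemon state as one JSON object, which the driver health check then parses. A minimal sketch of picking individual fields from the same endpoint by hand (jq is an assumption here, not part of this run):

    # Select a few of the fields visible in the dump above.
    docker system info --format '{{json .}}' | jq '{driver: .Driver, ncpu: .NCPU, mem: .MemTotal}'
    # Or let the Go template do the selection directly:
    docker system info --format '{{.ServerVersion}} on {{.OperatingSystem}}'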
	I0725 13:26:49.162174   60896 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:26:49.162190   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:26:49.162199   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:26:49.162223   60896 start_flags.go:310] config:
	{Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
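	The config block above is the persisted profile, stored as JSON alongside the cluster's other state. A sketch of inspecting it directly (jq and the default .minikube location are assumptions; this integration run uses the longer MINIKUBE_HOME path shown in the log):

    # Pull a couple of fields out of the saved profile config.
    jq '.KubernetesConfig.KubernetesVersion, .Nodes' \
      "$HOME/.minikube/profiles/embed-certs-20220725132539-44543/config.json"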
	I0725 13:26:49.205870   60896 out.go:177] * Starting control plane node embed-certs-20220725132539-44543 in cluster embed-certs-20220725132539-44543
	I0725 13:26:49.226856   60896 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:26:49.249040   60896 out.go:177] * Pulling base image ...
	I0725 13:26:49.291616   60896 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:26:49.291652   60896 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:26:49.291684   60896 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:26:49.291697   60896 cache.go:57] Caching tarball of preloaded images
	I0725 13:26:49.291833   60896 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:26:49.291855   60896 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:26:49.292505   60896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/config.json ...
	I0725 13:26:49.355938   60896 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:26:49.355966   60896 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:26:49.355978   60896 cache.go:208] Successfully downloaded all kic artifacts
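	Both cache checks above are local-only: the preload tarball is matched by exact file name on disk, and the kic base image by digest in the local daemon. Reproducing the checks by hand (paths and digest taken from the log lines above):

    # Preload tarball present => download is skipped.
    ls -lh "$HOME/.minikube/cache/preloaded-tarball/" | grep v1.24.2-docker-overlay2-amd64.tar.lz4
    # Base image digest present in the daemon => pull is skipped.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds | grep 96d18f055abc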
	I0725 13:26:49.356021   60896 start.go:370] acquiring machines lock for embed-certs-20220725132539-44543: {Name:mkedcda8c6ffd244a6eb5ea62b1d8110eb07449c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:26:49.356105   60896 start.go:374] acquired machines lock for "embed-certs-20220725132539-44543" in 59.916µs
	I0725 13:26:49.356125   60896 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:26:49.356136   60896 fix.go:55] fixHost starting: 
	I0725 13:26:49.356360   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:26:49.424125   60896 fix.go:103] recreateIfNeeded on embed-certs-20220725132539-44543: state=Stopped err=<nil>
	W0725 13:26:49.424176   60896 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:26:49.446453   60896 out.go:177] * Restarting existing docker container for "embed-certs-20220725132539-44543" ...
	I0725 13:26:49.468132   60896 cli_runner.go:164] Run: docker start embed-certs-20220725132539-44543
	I0725 13:26:49.813394   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:26:49.886745   60896 kic.go:415] container "embed-certs-20220725132539-44543" state is running.
	I0725 13:26:49.887403   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:49.963095   60896 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/config.json ...
	I0725 13:26:49.963502   60896 machine.go:88] provisioning docker machine ...
	I0725 13:26:49.963527   60896 ubuntu.go:169] provisioning hostname "embed-certs-20220725132539-44543"
	I0725 13:26:49.963596   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.039063   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.039288   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.039301   60896 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220725132539-44543 && echo "embed-certs-20220725132539-44543" | sudo tee /etc/hostname
	I0725 13:26:50.170431   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220725132539-44543
	
	I0725 13:26:50.170514   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.246235   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.246398   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.246415   60896 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220725132539-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220725132539-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220725132539-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:26:50.365664   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
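	The SSH script just run is the usual Debian/Ubuntu pattern: if no /etc/hosts line already carries the new hostname, the 127.0.1.1 entry is rewritten in place (or appended if absent), so the machine's own name always resolves locally. One way to confirm the result from inside the guest (a verification step, not part of this run):

    # Should print a 127.0.1.1 line after provisioning:
    getent hosts embed-certs-20220725132539-44543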
	I0725 13:26:50.365688   60896 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:26:50.365709   60896 ubuntu.go:177] setting up certificates
	I0725 13:26:50.365719   60896 provision.go:83] configureAuth start
	I0725 13:26:50.365796   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:50.440349   60896 provision.go:138] copyHostCerts
	I0725 13:26:50.440475   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:26:50.440485   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:26:50.440587   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:26:50.440815   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:26:50.440830   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:26:50.440890   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:26:50.441056   60896 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:26:50.441062   60896 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:26:50.441120   60896 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:26:50.441275   60896 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220725132539-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220725132539-44543]
	I0725 13:26:50.557687   60896 provision.go:172] copyRemoteCerts
	I0725 13:26:50.557751   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:26:50.557825   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.629344   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:50.718627   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:26:50.735715   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0725 13:26:50.751806   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:26:50.768118   60896 provision.go:86] duration metric: configureAuth took 402.373037ms
	I0725 13:26:50.768132   60896 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:26:50.768266   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:26:50.768315   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:50.840378   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:50.840536   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:50.840548   60896 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:26:50.965802   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:26:50.965819   60896 ubuntu.go:71] root file system type: overlay
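	The df probe above tells the provisioner what the guest's root filesystem is; here it reports overlay, so dockerd can keep its overlay2 storage driver. A related check, if one wants to confirm kernel support explicitly (an extra step, not part of this run):

    # overlay must be listed for dockerd's overlay2 driver to work.
    grep -w overlay /proc/filesystems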
	I0725 13:26:50.966002   60896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:26:50.966080   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.036849   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:51.036995   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:51.037043   60896 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:26:51.167067   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:26:51.167151   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.237871   60896 main.go:134] libmachine: Using SSH client type: native
	I0725 13:26:51.238049   60896 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59422 <nil> <nil>}
	I0725 13:26:51.238062   60896 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:26:51.363554   60896 main.go:134] libmachine: SSH cmd err, output: <nil>: 
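	The diff-then-replace command above is what makes unit provisioning idempotent: the new unit is written to docker.service.new, and only when diff reports a difference (or the target is missing) is it moved into place, followed by daemon-reload, enable, and restart; an unchanged config never restarts dockerd. The same pattern in isolation (function name and file names illustrative):

    # Install a config file and restart its service only if the content changed.
    install_if_changed() {
      local new=$1 target=$2 unit=$3
      if sudo diff -u "$target" "$new"; then
        sudo rm -f "$new"    # identical: nothing to do
      else
        sudo mv "$new" "$target"
        sudo systemctl -f daemon-reload && sudo systemctl -f restart "$unit"
      fi
    }
    install_if_changed /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker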
	I0725 13:26:51.363567   60896 machine.go:91] provisioned docker machine in 1.400015538s
	I0725 13:26:51.363577   60896 start.go:307] post-start starting for "embed-certs-20220725132539-44543" (driver="docker")
	I0725 13:26:51.363582   60896 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:26:51.363643   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:26:51.363691   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.437205   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.527183   60896 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:26:51.530742   60896 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:26:51.530759   60896 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:26:51.530765   60896 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:26:51.530770   60896 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:26:51.530783   60896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:26:51.530909   60896 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:26:51.531049   60896 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:26:51.531209   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:26:51.538152   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:26:51.555946   60896 start.go:310] post-start completed in 192.354602ms
	I0725 13:26:51.556040   60896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:26:51.556105   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.627974   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.714198   60896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:26:51.718597   60896 fix.go:57] fixHost completed within 2.362392942s
	I0725 13:26:51.718610   60896 start.go:82] releasing machines lock for "embed-certs-20220725132539-44543", held for 2.362429297s
	I0725 13:26:51.718700   60896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220725132539-44543
	I0725 13:26:51.789671   60896 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:26:51.789678   60896 ssh_runner.go:195] Run: systemctl --version
	I0725 13:26:51.789751   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.789757   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:51.866411   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:51.867863   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:26:52.170482   60896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:26:52.180365   60896 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:26:52.180423   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:26:52.191899   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:26:52.204163   60896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:26:52.271546   60896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:26:52.341953   60896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:26:52.404515   60896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:26:52.623120   60896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:26:52.693995   60896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:26:52.758221   60896 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:26:52.767593   60896 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:26:52.767655   60896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:26:52.771384   60896 start.go:471] Will wait 60s for crictl version
	I0725 13:26:52.771432   60896 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:26:52.874937   60896 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
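	crictl found cri-dockerd via the /etc/crictl.yaml written a moment earlier, and the version exchange above (RuntimeName docker, RuntimeApiVersion 1.41.0) confirms the socket answers CRI calls. The same check can be run without the config file by passing the endpoint explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version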
	I0725 13:26:52.875000   60896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:26:52.909296   60896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:26:52.986216   60896 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:26:52.986400   60896 cli_runner.go:164] Run: docker exec -t embed-certs-20220725132539-44543 dig +short host.docker.internal
	I0725 13:26:53.115923   60896 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:26:53.116029   60896 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:26:53.121448   60896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
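	The one-liner above refreshes the pinned host.minikube.internal entry atomically: strip any old line, append the new one, write to a temp file, then cp over /etc/hosts. Using cp rather than mv matters here, since a container's /etc/hosts is bind-mounted and must keep its inode. The pattern, spelled out (a reading of the command above, not new behavior):

    # Replace-or-append a pinned /etc/hosts entry without breaking the bind mount.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.65.2\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts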
	I0725 13:26:53.131166   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:53.203036   60896 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:26:53.203111   60896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:26:53.232252   60896 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:26:53.232269   60896 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:26:53.232348   60896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:26:53.262252   60896 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:26:53.262272   60896 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:26:53.262351   60896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:26:53.333671   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:26:53.333682   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:26:53.333696   60896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:26:53.333709   60896 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220725132539-44543 NodeName:embed-certs-20220725132539-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:26:53.333811   60896 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220725132539-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
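	The documents above (InitConfiguration and ClusterConfiguration, then KubeletConfiguration and KubeProxyConfiguration) are everything kubeadm consumes for this node. To exercise such a file before letting the init phases run, kubeadm's dry-run mode renders the manifests without touching the node (a manual step, not part of this run):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run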
	
	I0725 13:26:53.333903   60896 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220725132539-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
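	The kubelet drop-in above clears the inherited ExecStart and relaunches kubelet against cri-dockerd with the node IP and hostname pinned. Once the files land in /etc/systemd/system/kubelet.service.d/ (scp'd below), the merged unit can be inspected the way systemd sees it:

    # Effective unit = base kubelet.service + 10-kubeadm.conf drop-in.
    systemctl cat kubelet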
	I0725 13:26:53.333962   60896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:26:53.341316   60896 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:26:53.341375   60896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:26:53.348729   60896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0725 13:26:53.360708   60896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:26:53.372857   60896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0725 13:26:53.385380   60896 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:26:53.388890   60896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:26:53.398360   60896 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543 for IP: 192.168.76.2
	I0725 13:26:53.398470   60896 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:26:53.398520   60896 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:26:53.398593   60896 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/client.key
	I0725 13:26:53.398650   60896 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.key.31bdca25
	I0725 13:26:53.398698   60896 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.key
	I0725 13:26:53.398918   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:26:53.398960   60896 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:26:53.398971   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:26:53.399004   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:26:53.399033   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:26:53.399058   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:26:53.399119   60896 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:26:53.399636   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:26:53.416223   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:26:53.432572   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:26:53.449196   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/embed-certs-20220725132539-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:26:53.465993   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:26:53.482339   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:26:53.498714   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:26:53.515036   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:26:53.531395   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:26:53.547950   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:26:53.587127   60896 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:26:53.603886   60896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:26:53.616328   60896 ssh_runner.go:195] Run: openssl version
	I0725 13:26:53.621375   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:26:53.628836   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.632532   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.632580   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:26:53.637683   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:26:53.644581   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:26:53.652216   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.655971   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.656010   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:26:53.661284   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:26:53.668359   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:26:53.676006   60896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.679917   60896 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.680017   60896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:26:53.685793   60896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
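	The "openssl x509 -hash" calls above compute the subject-name hash that OpenSSL's CA directory lookup expects: every trusted cert in /etc/ssl/certs must be reachable as <hash>.0, which is exactly what the ln -fs commands create (b5213941.0 for minikubeCA, and so on). Deriving one of those names by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h.0"    # prints b5213941.0 for this CA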
	I0725 13:26:53.692646   60896 kubeadm.go:395] StartCluster: {Name:embed-certs-20220725132539-44543 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220725132539-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:26:53.692743   60896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:26:53.721978   60896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:26:53.729370   60896 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:26:53.729381   60896 kubeadm.go:626] restartCluster start
	I0725 13:26:53.729418   60896 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:26:53.736072   60896 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:53.736123   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:26:53.808101   60896 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220725132539-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:26:53.808262   60896 kubeconfig.go:127] "embed-certs-20220725132539-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:26:53.808621   60896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:26:53.809797   60896 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:26:53.817648   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:53.817713   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:53.826733   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.027462   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.027716   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.038403   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.227398   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.227644   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.238236   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.427576   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.427755   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.438756   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.627358   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.627497   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.636422   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:54.827394   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:54.827487   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:54.838488   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.026933   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.027049   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.037485   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.226880   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.226957   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.235857   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.427840   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.428001   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.438429   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.628967   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.629079   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.639603   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:55.828963   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:55.829161   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:55.839558   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.027028   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.027119   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.037616   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.229013   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.229229   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.239416   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.427054   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.427246   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.437328   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.628631   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.628739   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.639033   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.826934   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.826996   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.835856   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.835867   60896 api_server.go:165] Checking apiserver status ...
	I0725 13:26:56.835926   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:26:56.844515   60896 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.844534   60896 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
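	The repeated blocks above are a poll: roughly every 200 ms the restart logic re-runs pgrep for the apiserver until a pid appears or the wait budget expires, and here it expires, so the cluster is reconfigured from scratch. An equivalent standalone wait loop (the 30 s budget is illustrative):

    # Wait for kube-apiserver the way the restart check above does.
    deadline=$((SECONDS + 30))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'apiserver never came up' >&2; exit 1; }
      sleep 0.2
    done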
	I0725 13:26:56.844542   60896 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:26:56.844600   60896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:26:56.878986   60896 docker.go:443] Stopping containers: [c3d79be36829 1c51316a3481 68bdaf34cc2f 7c6f3ac7c5f3 79f7459bb476 b7549c872bf4 ff7140875b14 b8bc65908490 2261dd283394 99e1f7baa7d0 a2c3192c3c39 f358469cafac 4eba7ec75371 e84371b0922e 3ff0cb9c7d63 22853dac1834]
	I0725 13:26:56.879060   60896 ssh_runner.go:195] Run: docker stop c3d79be36829 1c51316a3481 68bdaf34cc2f 7c6f3ac7c5f3 79f7459bb476 b7549c872bf4 ff7140875b14 b8bc65908490 2261dd283394 99e1f7baa7d0 a2c3192c3c39 f358469cafac 4eba7ec75371 e84371b0922e 3ff0cb9c7d63 22853dac1834
	I0725 13:26:56.908333   60896 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:26:56.918203   60896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:26:56.925547   60896 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 25 20:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 25 20:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 25 20:25 /etc/kubernetes/scheduler.conf
	
	I0725 13:26:56.925599   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:26:56.932577   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:26:56.939336   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:26:56.946087   60896 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.946134   60896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:26:56.952735   60896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:26:56.959517   60896 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:26:56.959565   60896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
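Note: the grep/rm sequence above is minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm will regenerate it. Sketched as a loop over the same four files and endpoint:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done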
	I0725 13:26:56.965970   60896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:56.972930   60896 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:26:56.972940   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.017926   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.767698   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.943236   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:26:57.991345   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
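Note: rather than running a full kubeadm init, minikube replays individual init phases against the refreshed config. The five commands above amount to the following loop (a sketch, assuming the versioned binaries directory used throughout this run):

    # $phase is left unquoted so "certs all" expands to two arguments
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml || break
    done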
	I0725 13:26:58.050211   60896 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:26:58.050286   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:58.582245   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:59.082479   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:26:59.094188   60896 api_server.go:71] duration metric: took 1.043948998s to wait for apiserver process to appear ...
	I0725 13:26:59.094209   60896 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:26:59.094231   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:26:59.095509   60896 api_server.go:256] stopped: https://127.0.0.1:59426/healthz: Get "https://127.0.0.1:59426/healthz": EOF
	I0725 13:26:59.596003   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:02.411623   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:27:02.411651   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:27:02.597905   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:02.607032   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:27:02.607059   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 500:
	[response body identical to the 500 healthz detail logged immediately above]
	I0725 13:27:03.095805   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:03.102036   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:27:03.102057   60896 api_server.go:102] status: https://127.0.0.1:59426/healthz returned error 500:
	[response body identical to the 500 healthz detail logged immediately above]
	I0725 13:27:03.595754   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:27:03.601497   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 200:
	ok
	I0725 13:27:03.613887   60896 api_server.go:140] control plane version: v1.24.2
	I0725 13:27:03.613902   60896 api_server.go:130] duration metric: took 4.51955617s to wait for apiserver health ...
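Note: the progression above (connection EOF while nothing is listening, 403 for the anonymous probe until the RBAC bootstrap roles land, 500 while post-start hooks finish, then 200) is the normal apiserver restart sequence. The same wait can be scripted against the host-mapped port from this run, with -k because the probe presents no client certificate:

    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://127.0.0.1:59426/healthz)" = "200" ]; do
      sleep 0.5
    done
    curl -sk https://127.0.0.1:59426/healthz   # prints: ok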
	I0725 13:27:03.613908   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:27:03.613912   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:27:03.613920   60896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:27:03.621732   60896 system_pods.go:59] 8 kube-system pods found
	I0725 13:27:03.621746   60896 system_pods.go:61] "coredns-6d4b75cb6d-htpr6" [ea0b0f7f-8b0a-4385-b505-e3122fe524b0] Running
	I0725 13:27:03.621754   60896 system_pods.go:61] "etcd-embed-certs-20220725132539-44543" [9d01d9cf-2802-46d5-8ca1-7a4e6c619232] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 13:27:03.621759   60896 system_pods.go:61] "kube-apiserver-embed-certs-20220725132539-44543" [89aebf00-48c5-4d71-b8c4-ad3faade9c36] Running
	I0725 13:27:03.621763   60896 system_pods.go:61] "kube-controller-manager-embed-certs-20220725132539-44543" [b27f6cdf-dc9b-4c22-820a-434d64ff35d1] Running
	I0725 13:27:03.621767   60896 system_pods.go:61] "kube-proxy-7pjkq" [7e1ad46c-cdbd-4109-956b-3250bf6a1a8e] Running
	I0725 13:27:03.621772   60896 system_pods.go:61] "kube-scheduler-embed-certs-20220725132539-44543" [946e68be-c055-4c90-bd5d-31c53b3534a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:27:03.621779   60896 system_pods.go:61] "metrics-server-5c6f97fb75-4xt92" [705f970d-49d5-4a4c-9e18-6da6f236cff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:27:03.621783   60896 system_pods.go:61] "storage-provisioner" [4b92166d-6e5a-4692-b6e4-4269d858e8c3] Running
	I0725 13:27:03.621787   60896 system_pods.go:74] duration metric: took 7.862224ms to wait for pod list to return data ...
	I0725 13:27:03.621793   60896 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:27:03.624429   60896 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:27:03.624445   60896 node_conditions.go:123] node cpu capacity is 6
	I0725 13:27:03.624453   60896 node_conditions.go:105] duration metric: took 2.656612ms to run NodePressure ...
	I0725 13:27:03.624470   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:27:03.765029   60896 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 13:27:03.769671   60896 kubeadm.go:777] kubelet initialised
	I0725 13:27:03.769686   60896 kubeadm.go:778] duration metric: took 4.63572ms waiting for restarted kubelet to initialise ...
	I0725 13:27:03.769694   60896 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
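Note: the pod_ready waits that follow give each system-critical pod up to 4m0s to report the Ready condition. The same gate expressed with kubectl, using the selectors listed above (the context name is assumed to match the profile, which is how minikube writes kubeconfig):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context embed-certs-20220725132539-44543 -n kube-system \
        wait pod -l "$sel" --for=condition=Ready --timeout=4m
    done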
	I0725 13:27:03.774718   60896 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:03.779434   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:03.779442   60896 pod_ready.go:81] duration metric: took 4.711352ms waiting for pod "coredns-6d4b75cb6d-htpr6" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:03.779448   60896 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:05.795546   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:07.796661   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:09.797132   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:11.797247   60896 pod_ready.go:102] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:13.797607   60896 pod_ready.go:92] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:13.797620   60896 pod_ready.go:81] duration metric: took 10.017876261s waiting for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.797626   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.801581   60896 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:13.801589   60896 pod_ready.go:81] duration metric: took 3.958491ms waiting for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:13.801594   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:15.814130   60896 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:18.313101   60896 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:18.812898   60896 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.812911   60896 pod_ready.go:81] duration metric: took 5.011165723s waiting for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.812917   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7pjkq" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.816934   60896 pod_ready.go:92] pod "kube-proxy-7pjkq" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.816941   60896 pod_ready.go:81] duration metric: took 4.020031ms waiting for pod "kube-proxy-7pjkq" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.816946   60896 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.820860   60896 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:27:18.820867   60896 pod_ready.go:81] duration metric: took 3.91141ms waiting for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:18.820873   60896 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" ...
	I0725 13:27:20.830973   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:22.831222   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:25.330338   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:27.331801   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:29.333198   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:31.833556   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:34.331073   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:36.332608   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:38.333354   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:40.832521   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:42.833818   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:45.331089   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:47.334374   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:49.832653   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:51.834727   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:54.334928   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:27:56.832318   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:00.078228   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:28:00.141595   60183 out.go:204]   - Booting up control plane ...
	W0725 13:28:00.145489   60183 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 13:28:00.145526   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0725 13:28:00.568444   60183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:28:00.578598   60183 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:28:00.578655   60183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:28:00.586062   60183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:28:00.586084   60183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:28:01.303869   60183 out.go:204]   - Generating certificates and keys ...
	I0725 13:27:58.834263   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:01.331255   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:03.332422   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:02.819781   60183 out.go:204]   - Booting up control plane ...
	I0725 13:28:05.334830   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:07.335519   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:09.834439   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:11.835591   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:14.334337   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:16.335031   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:18.835705   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:21.333617   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:23.334016   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:25.833249   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:27.835336   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:30.334081   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:32.335851   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:34.833220   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:36.833578   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:39.336254   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:41.835840   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:44.334110   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:46.833528   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:48.836513   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:50.836788   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:53.336552   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:55.835048   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:28:57.835550   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:00.336892   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:02.836866   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:05.336690   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:07.835667   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:09.837186   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:12.335226   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:14.335725   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:16.336690   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:18.836768   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:21.337269   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:23.837151   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:26.335027   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:28.338729   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:30.837640   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:33.337517   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:35.835459   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:37.836627   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:40.335900   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:42.337529   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:44.337705   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:46.836842   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:49.337159   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:51.838140   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:54.338382   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:56.837922   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:29:57.738590   60183 kubeadm.go:397] StartCluster complete in 7m59.250327249s
	I0725 13:29:57.738667   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 13:29:57.767241   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.767253   60183 logs.go:276] No container was found matching "kube-apiserver"
	I0725 13:29:57.767311   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 13:29:57.795435   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.795448   60183 logs.go:276] No container was found matching "etcd"
	I0725 13:29:57.795503   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 13:29:57.824559   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.824581   60183 logs.go:276] No container was found matching "coredns"
	I0725 13:29:57.824642   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 13:29:57.854900   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.854912   60183 logs.go:276] No container was found matching "kube-scheduler"
	I0725 13:29:57.854967   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 13:29:57.883684   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.883695   60183 logs.go:276] No container was found matching "kube-proxy"
	I0725 13:29:57.883747   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0725 13:29:57.917022   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.917034   60183 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0725 13:29:57.917091   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 13:29:57.948784   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.948800   60183 logs.go:276] No container was found matching "storage-provisioner"
	I0725 13:29:57.948858   60183 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 13:29:57.982242   60183 logs.go:274] 0 containers: []
	W0725 13:29:57.982254   60183 logs.go:276] No container was found matching "kube-controller-manager"
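Note: all eight scans above come up empty because the control plane never started. The k8s_ prefix they filter on is the dockershim container-naming convention (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), so the sweep reduces to:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done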
	I0725 13:29:57.982261   60183 logs.go:123] Gathering logs for container status ...
	I0725 13:29:57.982268   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 13:30:00.039435   60183 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0570932s)
	I0725 13:30:00.039559   60183 logs.go:123] Gathering logs for kubelet ...
	I0725 13:30:00.039566   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 13:30:00.078912   60183 logs.go:123] Gathering logs for dmesg ...
	I0725 13:30:00.078928   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 13:30:00.090262   60183 logs.go:123] Gathering logs for describe nodes ...
	I0725 13:30:00.090278   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 13:30:00.144105   60183 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 13:30:00.144118   60183 logs.go:123] Gathering logs for Docker ...
	I0725 13:30:00.144124   60183 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0725 13:30:00.158368   60183 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 13:30:00.158385   60183 out.go:239] * 
	W0725 13:30:00.158482   60183 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output logged immediately above]
	
	W0725 13:30:00.158497   60183 out.go:239] * 
	W0725 13:30:00.159027   60183 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 13:30:00.221624   60183 out.go:177] 
	W0725 13:30:00.264001   60183 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output logged immediately above]
	
	W0725 13:30:00.264127   60183 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 13:30:00.264205   60183 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
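	A minimal sketch of acting on the suggestion above (the profile name is hypothetical; --extra-config and the ssh subcommand are standard minikube options):
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	    # then inspect the kubelet once the node container is up (same journalctl call kubeadm suggests):
	    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet | tail -n 50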
	I0725 13:30:00.327781   60183 out.go:177] 
	I0725 13:29:59.337892   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:01.834615   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:03.837458   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:05.838380   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:08.337530   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:10.837884   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:12.838423   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:14.839069   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:17.338474   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:19.839227   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:22.338405   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:24.339171   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:26.839399   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:29.337976   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:31.839581   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:34.339838   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:36.839475   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:39.337079   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:41.337712   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:43.339506   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:45.837751   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:47.838863   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:49.840092   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:52.338270   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:54.340101   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:56.836913   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:30:59.339951   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:01.839593   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:04.338000   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:06.340092   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:08.840337   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:11.338636   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:13.339284   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:15.340719   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:17.341394   60896 pod_ready.go:102] pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:18.834086   60896 pod_ready.go:81] duration metric: took 4m0.006215554s waiting for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" ...
	E0725 13:31:18.834110   60896 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-4xt92" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:31:18.834130   60896 pod_ready.go:38] duration metric: took 4m15.057006558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:31:18.834170   60896 kubeadm.go:630] restartCluster took 4m25.097071028s
	W0725 13:31:18.834308   60896 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0725 13:31:18.834336   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:31:21.172068   60896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.337649089s)
	I0725 13:31:21.172128   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:31:21.181652   60896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:31:21.189594   60896 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:31:21.189647   60896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:31:21.197391   60896 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:31:21.197416   60896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:31:21.478874   60896 out.go:204]   - Generating certificates and keys ...
	I0725 13:31:22.599968   60896 out.go:204]   - Booting up control plane ...
	I0725 13:31:30.133934   60896 out.go:204]   - Configuring RBAC rules ...
	I0725 13:31:30.524739   60896 cni.go:95] Creating CNI manager for ""
	I0725 13:31:30.524751   60896 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:31:30.524767   60896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:31:30.524868   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:30.524894   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=embed-certs-20220725132539-44543 minikube.k8s.io/updated_at=2022_07_25T13_31_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:30.637807   60896 ops.go:34] apiserver oom_adj: -16
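	The kubectl label run above stamps the minikube version/commit labels that later show up under "describe nodes"; a hand-run verification sketch (assumes the local kubectl context already points at this profile):
	    kubectl get node embed-certs-20220725132539-44543 --show-labels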
	I0725 13:31:30.637819   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:31.207665   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:31.708629   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:32.207669   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:32.707725   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:33.206619   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:33.708741   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:34.207204   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:34.707441   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:35.208733   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:35.707874   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:36.207589   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:36.707809   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:37.207740   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:37.707370   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:38.207308   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:38.706826   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:39.206881   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:39.708858   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:40.208450   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:40.706969   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:41.206867   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:41.706953   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:42.206938   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:42.706934   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:43.206961   60896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:31:43.260786   60896 kubeadm.go:1045] duration metric: took 12.735613106s to wait for elevateKubeSystemPrivileges.
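	The repeated "kubectl get sa default" calls above are elevateKubeSystemPrivileges polling for the default ServiceAccount to exist; a minimal equivalent loop (binary and kubeconfig paths taken from the log, the interval inferred from the ~0.5s spacing of the poll timestamps):
	    until sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # poll until the default ServiceAccount appears
	    done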
	I0725 13:31:43.260807   60896 kubeadm.go:397] StartCluster complete in 4m49.559741772s
	I0725 13:31:43.260831   60896 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:31:43.260918   60896 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:31:43.261674   60896 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:31:43.789723   60896 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220725132539-44543" rescaled to 1
	I0725 13:31:43.789759   60896 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:31:43.789779   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:31:43.789789   60896 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:31:43.789846   60896 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220725132539-44543"
	I0725 13:31:43.789851   60896 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220725132539-44543"
	I0725 13:31:43.832406   60896 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220725132539-44543"
	I0725 13:31:43.789856   60896 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220725132539-44543"
	W0725 13:31:43.832420   60896 addons.go:162] addon metrics-server should already be in state true
	I0725 13:31:43.832423   60896 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220725132539-44543"
	W0725 13:31:43.832430   60896 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:31:43.832435   60896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220725132539-44543"
	I0725 13:31:43.789884   60896 addons.go:65] Setting dashboard=true in profile "embed-certs-20220725132539-44543"
	I0725 13:31:43.832465   60896 addons.go:153] Setting addon dashboard=true in "embed-certs-20220725132539-44543"
	I0725 13:31:43.789964   60896 config.go:178] Loaded profile config "embed-certs-20220725132539-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	W0725 13:31:43.832478   60896 addons.go:162] addon dashboard should already be in state true
	I0725 13:31:43.832481   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:43.832350   60896 out.go:177] * Verifying Kubernetes components...
	I0725 13:31:43.832523   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:43.832514   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:43.832730   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.832863   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.853502   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:31:43.857670   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.857757   60896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:31:43.857763   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:43.886652   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:43.981533   60896 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220725132539-44543"
	I0725 13:31:43.995633   60896 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:31:44.032544   60896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0725 13:31:44.032555   60896 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:31:44.032587   60896 host.go:66] Checking if "embed-certs-20220725132539-44543" exists ...
	I0725 13:31:44.128468   60896 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:31:44.069777   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:31:44.088256   60896 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220725132539-44543" to be "Ready" ...
	I0725 13:31:44.107700   60896 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:31:44.129018   60896 cli_runner.go:164] Run: docker container inspect embed-certs-20220725132539-44543 --format={{.State.Status}}
	I0725 13:31:44.149686   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:31:44.149713   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:31:44.171575   60896 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:31:44.149906   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.149958   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.192570   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:31:44.192596   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:31:44.192681   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.196642   60896 node_ready.go:49] node "embed-certs-20220725132539-44543" has status "Ready":"True"
	I0725 13:31:44.196673   60896 node_ready.go:38] duration metric: took 47.039384ms waiting for node "embed-certs-20220725132539-44543" to be "Ready" ...
	I0725 13:31:44.196683   60896 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:31:44.205428   60896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:44.251652   60896 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:31:44.251671   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:31:44.251763   60896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220725132539-44543
	I0725 13:31:44.278968   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.282079   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.297541   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.341814   60896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59422 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/embed-certs-20220725132539-44543/id_rsa Username:docker}
	I0725 13:31:44.480595   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:31:44.480614   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:31:44.484668   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:31:44.492992   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:31:44.493008   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:31:44.502662   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:31:44.505187   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:31:44.505200   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:31:44.593413   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:31:44.593434   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:31:44.680648   60896 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:31:44.680675   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:31:44.692890   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:31:44.692904   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:31:44.789191   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:31:44.804845   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:31:44.804859   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:31:44.885209   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:31:44.885223   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:31:44.904020   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:31:44.904035   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:31:44.998800   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:31:44.998819   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:31:45.094281   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:31:45.094297   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:31:45.114673   60896 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:31:45.114691   60896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:31:45.202664   60896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:31:45.320349   60896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.462520601s)
	I0725 13:31:45.320371   60896 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
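	The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward directive; the injected stanza (values taken verbatim from the logged command) is:
	    hosts {
	       192.168.65.2 host.minikube.internal
	       fallthrough
	    }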
	I0725 13:31:45.486719   60896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001991871s)
	I0725 13:31:45.628322   60896 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220725132539-44543"
	I0725 13:31:45.926821   60896 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:31:45.962589   60896 addons.go:414] enableAddons completed in 2.17271439s
	I0725 13:31:46.222098   60896 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:48.726349   60896 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace has status "Ready":"False"
	I0725 13:31:49.231974   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.231992   60896 pod_ready.go:81] duration metric: took 5.026392872s waiting for pod "coredns-6d4b75cb6d-4zn7p" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.232000   60896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-m6mqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.241054   60896 pod_ready.go:92] pod "coredns-6d4b75cb6d-m6mqs" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.241066   60896 pod_ready.go:81] duration metric: took 9.060472ms waiting for pod "coredns-6d4b75cb6d-m6mqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.241072   60896 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.278239   60896 pod_ready.go:92] pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.278258   60896 pod_ready.go:81] duration metric: took 37.176858ms waiting for pod "etcd-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.278274   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.288381   60896 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.288396   60896 pod_ready.go:81] duration metric: took 10.109871ms waiting for pod "kube-apiserver-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.288407   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.296826   60896 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.296836   60896 pod_ready.go:81] duration metric: took 8.420236ms waiting for pod "kube-controller-manager-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.296843   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qvlv7" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.620355   60896 pod_ready.go:92] pod "kube-proxy-qvlv7" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:49.620366   60896 pod_ready.go:81] duration metric: took 323.508913ms waiting for pod "kube-proxy-qvlv7" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:49.620373   60896 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:50.021783   60896 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:31:50.021794   60896 pod_ready.go:81] duration metric: took 401.404849ms waiting for pod "kube-scheduler-embed-certs-20220725132539-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:31:50.021801   60896 pod_ready.go:38] duration metric: took 5.824934979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:31:50.021817   60896 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:31:50.021876   60896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:31:50.033076   60896 api_server.go:71] duration metric: took 6.243114213s to wait for apiserver process to appear ...
	I0725 13:31:50.033091   60896 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:31:50.033098   60896 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59426/healthz ...
	I0725 13:31:50.038511   60896 api_server.go:266] https://127.0.0.1:59426/healthz returned 200:
	ok
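	The healthz probe can be reproduced by hand against the forwarded apiserver port from the log (-k skips TLS verification, since the cluster CA lives in the minikube profile rather than the system trust store):
	    curl -k https://127.0.0.1:59426/healthz
	    # expected body: ok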
	I0725 13:31:50.039598   60896 api_server.go:140] control plane version: v1.24.2
	I0725 13:31:50.039608   60896 api_server.go:130] duration metric: took 6.512407ms to wait for apiserver health ...
	I0725 13:31:50.039612   60896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:31:50.223624   60896 system_pods.go:59] 9 kube-system pods found
	I0725 13:31:50.223637   60896 system_pods.go:61] "coredns-6d4b75cb6d-4zn7p" [55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7] Running
	I0725 13:31:50.223641   60896 system_pods.go:61] "coredns-6d4b75cb6d-m6mqs" [6287a319-fd89-45e2-a0aa-615a41b7ba03] Running
	I0725 13:31:50.223644   60896 system_pods.go:61] "etcd-embed-certs-20220725132539-44543" [8379e39b-c68c-4c21-ac53-1d3ad380104c] Running
	I0725 13:31:50.223648   60896 system_pods.go:61] "kube-apiserver-embed-certs-20220725132539-44543" [65bb168c-bc95-456d-8ab4-df3d903c5c84] Running
	I0725 13:31:50.223651   60896 system_pods.go:61] "kube-controller-manager-embed-certs-20220725132539-44543" [1e433951-6dfb-4f38-9cc7-008f5e50554f] Running
	I0725 13:31:50.223654   60896 system_pods.go:61] "kube-proxy-qvlv7" [92133973-6c32-49a0-910f-93bfa25bcdd1] Running
	I0725 13:31:50.223659   60896 system_pods.go:61] "kube-scheduler-embed-certs-20220725132539-44543" [cd2956ef-a5fc-49ce-ba4f-f53046e750f2] Running
	I0725 13:31:50.223664   60896 system_pods.go:61] "metrics-server-5c6f97fb75-5w696" [5c519070-80ae-4ef5-b8f4-5927ba8fc676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:31:50.223669   60896 system_pods.go:61] "storage-provisioner" [d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca] Running
	I0725 13:31:50.223673   60896 system_pods.go:74] duration metric: took 184.051632ms to wait for pod list to return data ...
	I0725 13:31:50.223678   60896 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:31:50.421586   60896 default_sa.go:45] found service account: "default"
	I0725 13:31:50.421598   60896 default_sa.go:55] duration metric: took 197.910754ms for default service account to be created ...
	I0725 13:31:50.421604   60896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:31:50.623670   60896 system_pods.go:86] 9 kube-system pods found
	I0725 13:31:50.623684   60896 system_pods.go:89] "coredns-6d4b75cb6d-4zn7p" [55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7] Running
	I0725 13:31:50.623691   60896 system_pods.go:89] "coredns-6d4b75cb6d-m6mqs" [6287a319-fd89-45e2-a0aa-615a41b7ba03] Running
	I0725 13:31:50.623695   60896 system_pods.go:89] "etcd-embed-certs-20220725132539-44543" [8379e39b-c68c-4c21-ac53-1d3ad380104c] Running
	I0725 13:31:50.623698   60896 system_pods.go:89] "kube-apiserver-embed-certs-20220725132539-44543" [65bb168c-bc95-456d-8ab4-df3d903c5c84] Running
	I0725 13:31:50.623702   60896 system_pods.go:89] "kube-controller-manager-embed-certs-20220725132539-44543" [1e433951-6dfb-4f38-9cc7-008f5e50554f] Running
	I0725 13:31:50.623706   60896 system_pods.go:89] "kube-proxy-qvlv7" [92133973-6c32-49a0-910f-93bfa25bcdd1] Running
	I0725 13:31:50.623709   60896 system_pods.go:89] "kube-scheduler-embed-certs-20220725132539-44543" [cd2956ef-a5fc-49ce-ba4f-f53046e750f2] Running
	I0725 13:31:50.623714   60896 system_pods.go:89] "metrics-server-5c6f97fb75-5w696" [5c519070-80ae-4ef5-b8f4-5927ba8fc676] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:31:50.623718   60896 system_pods.go:89] "storage-provisioner" [d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca] Running
	I0725 13:31:50.623722   60896 system_pods.go:126] duration metric: took 202.109351ms to wait for k8s-apps to be running ...
	I0725 13:31:50.623730   60896 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 13:31:50.623784   60896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:31:50.635149   60896 system_svc.go:56] duration metric: took 11.408107ms WaitForService to wait for kubelet.
	I0725 13:31:50.635164   60896 kubeadm.go:572] duration metric: took 6.845188044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:31:50.635187   60896 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:31:50.820572   60896 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:31:50.820586   60896 node_conditions.go:123] node cpu capacity is 6
	I0725 13:31:50.820593   60896 node_conditions.go:105] duration metric: took 185.39801ms to run NodePressure ...
	I0725 13:31:50.820602   60896 start.go:216] waiting for startup goroutines ...
	I0725 13:31:50.852176   60896 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:31:50.873434   60896 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220725132539-44543" cluster and "default" namespace by default
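	The closing line above compares client and server minor versions; the same skew check can be run by hand (kubectl version --short is valid in v1.24):
	    kubectl version --short   # client v1.24.1 vs. server v1.24.2 => minor skew 0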
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:26:49 UTC, end at Mon 2022-07-25 20:32:51 UTC. --
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.370771919Z" level=info msg="ignoring event" container=71966907b491834781c58b89b9fd8711b6f7a3db447a18c88b2966cf3bcbd9e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.440057133Z" level=info msg="ignoring event" container=0aec9c7d4248eec0e9c77d3c59225d5cb857181c4957ac863c59dbbb21457f69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.553895228Z" level=info msg="ignoring event" container=c5c9dd312fa561edca255ed23a4b89e20db3e624ca52855b2c361747bdceca46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.618129752Z" level=info msg="ignoring event" container=375215e177b2c12fb6a0e338fae989f55e17cfd0a56faf75e822bc4480f743d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.697561986Z" level=info msg="ignoring event" container=253f0f1685d75fb088997970c54a3c821aabde506a3395c8bd0a42f820354d13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.759162740Z" level=info msg="ignoring event" container=fb77eda7acd0f461d040c83dad2fa08dd76cada6c81a2910aac86fcd80b4cf06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:20 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:20.848188863Z" level=info msg="ignoring event" container=119bc29f357405ad917f56b17b03815d613e36de424206b29b1623c884fd20bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:46 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:46.761240108Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:31:46 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:46.761288557Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:31:46 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:46.762299809Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:31:47 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:47.858622031Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:31:48 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:48.157834691Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.050364498Z" level=info msg="ignoring event" container=dcefbd6025133341169fffe76554d1c404eb2da7044d81ce76efb865b7173688 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.253539985Z" level=info msg="ignoring event" container=be78190d2ffa8146fb25278f510eb579b84f69e76e4f04f6cab429921baf5c3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.635282144Z" level=info msg="ignoring event" container=da4096f15772b88569be3f2f14b83de85b56da7ce549a12c481310415644cafb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:31:51 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:51.801914213Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 20:31:52 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:31:52.479879998Z" level=info msg="ignoring event" container=8c59a7a81bd93e9536ad9c24f177947698b86bbb5dbe857ad4c0883824eab73c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:32:01 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:01.558119564Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:01 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:01.558163132Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:01 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:01.559195801Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:10 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:10.679853127Z" level=info msg="ignoring event" container=74f9b364f42ab5fb7e8406c542d7d5ab232fdf3b1d2a38c262579ac2087de8a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:48.958146680Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:48.958173515Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:48.958812262Z" level=info msg="ignoring event" container=4f9bcb96b351d6d9a5e040aed1f3ecb102c360aca3b0f5b0468e4ca7e4f516b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 dockerd[496]: time="2022-07-25T20:32:48.959830215Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
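	The fake.domain pull failures above are what keep the metrics-server pod Pending (its image was pointed at fake.domain/k8s.gcr.io/echoserver:1.4 earlier in this run); the same error can be reproduced from inside the node container (profile name from the log):
	    minikube ssh -p embed-certs-20220725132539-44543 -- docker pull fake.domain/k8s.gcr.io/echoserver:1.4
	    # fails with: dial tcp: lookup fake.domain ... no such host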
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	4f9bcb96b351d       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   3b509a454f260
	5d443504c77d9       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   54 seconds ago       Running             kubernetes-dashboard        0                   de5934f2a6b43
	57ad4b4dda5bb       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   58a01c9a94306
	6f8b5d6ba405f       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   466dfdc8aa749
	79d93d68c7bc3       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   f217baf9b0cd2
	09b5fd9f7b622       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   d4d8e213f4f74
	12e962a6ce008       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   2135c184c938a
	40e9cee209077       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   54a4c998cf04f
	807575ee1a9a5       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   5efc03e311742
	
	* 
	* ==> coredns [6f8b5d6ba405] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220725132539-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220725132539-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=embed-certs-20220725132539-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_31_30_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:31:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220725132539-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:32:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:32:44 +0000   Mon, 25 Jul 2022 20:31:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-20220725132539-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                215a12c7-a2b8-46f8-b13f-1d5e49241600
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4zn7p                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     68s
	  kube-system                 etcd-embed-certs-20220725132539-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kube-apiserver-embed-certs-20220725132539-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-embed-certs-20220725132539-44543    200m (3%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-qvlv7                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-embed-certs-20220725132539-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 metrics-server-5c6f97fb75-5w696                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         66s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-hpdct                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-mp7g8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x3 over 88s)  kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x3 over 88s)  kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x3 over 88s)  kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                71s                kubelet          Node embed-certs-20220725132539-44543 status is now: NodeReady
	  Normal  RegisteredNode           69s                node-controller  Node embed-certs-20220725132539-44543 event: Registered Node embed-certs-20220725132539-44543 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node embed-certs-20220725132539-44543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [807575ee1a9a] <==
	* {"level":"info","ts":"2022-07-25T20:31:24.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:31:24.461Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:31:25.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-20220725132539-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:31:25.255Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:31:25.256Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.256Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.256Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:31:25.257Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:31:25.257Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:31:25.261Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:31:25.261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:32:52 up  1:14,  0 users,  load average: 0.55, 0.81, 1.10
	Linux embed-certs-20220725132539-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [12e962a6ce00] <==
	* I0725 20:31:29.415603       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:31:30.372554       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:31:30.377894       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 20:31:30.387359       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:31:30.409950       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:31:42.822982       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:31:43.020795       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 20:31:44.003401       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:31:45.633303       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.97.51.201]
	I0725 20:31:45.898413       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.183.11]
	I0725 20:31:45.910007       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.221.100]
	W0725 20:31:46.539642       1 handler_proxy.go:102] no RequestInfo found in the context
	W0725 20:31:46.539739       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:31:46.539771       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	E0725 20:31:46.539799       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:31:46.539814       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 20:31:46.541260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:32:46.499328       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:32:46.499398       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:32:46.499408       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:32:46.500569       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:32:46.500678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:32:46.500716       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [40e9cee20907] <==
	* I0725 20:31:43.024743       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qvlv7"
	I0725 20:31:43.274696       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-m6mqs"
	I0725 20:31:43.279391       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4zn7p"
	I0725 20:31:43.302571       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	E0725 20:31:43.305851       1 replica_set.go:550] sync "kube-system/coredns-6d4b75cb6d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-6d4b75cb6d": the object has been modified; please apply your changes to the latest version and try again
	I0725 20:31:43.308950       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-m6mqs"
	I0725 20:31:45.523782       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:31:45.528140       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:31:45.598549       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:31:45.603442       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-5w696"
	I0725 20:31:45.717349       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:31:45.725413       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0725 20:31:45.727454       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:31:45.732590       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:31:45.734887       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:31:45.736020       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:31:45.736080       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:31:45.739794       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 20:31:45.742577       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:31:45.742739       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:31:45.798566       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-hpdct"
	I0725 20:31:45.802727       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-mp7g8"
	W0725 20:31:52.364949       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0725 20:32:44.392876       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 20:32:44.455047       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [79d93d68c7bc] <==
	* I0725 20:31:43.909226       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:31:43.909368       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:31:43.909403       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:31:44.000185       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:31:44.000284       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:31:44.000293       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:31:44.000303       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:31:44.000351       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:31:44.000529       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:31:44.000777       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:31:44.000784       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:31:44.001122       1 config.go:317] "Starting service config controller"
	I0725 20:31:44.001135       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:31:44.001231       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:31:44.001237       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:31:44.001411       1 config.go:444] "Starting node config controller"
	I0725 20:31:44.001423       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:31:44.101851       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:31:44.101892       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:31:44.101899       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [09b5fd9f7b62] <==
	* W0725 20:31:27.330715       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:31:27.330748       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:31:28.165785       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:31:28.165840       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:31:28.165785       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 20:31:28.165886       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 20:31:28.203972       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:31:28.204009       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:31:28.245452       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:31:28.245501       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:31:28.247590       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:31:28.247634       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:31:28.321716       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:31:28.321767       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:31:28.357626       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 20:31:28.357662       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 20:31:28.399651       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 20:31:28.399691       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 20:31:28.442074       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:31:28.442089       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:31:28.454053       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:31:28.454089       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 20:31:28.462184       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:31:28.462258       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0725 20:31:30.726755       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:26:49 UTC, end at Mon 2022-07-25 20:32:53 UTC. --
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897219    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvxl9\" (UniqueName: \"kubernetes.io/projected/55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7-kube-api-access-gvxl9\") pod \"coredns-6d4b75cb6d-4zn7p\" (UID: \"55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7\") " pod="kube-system/coredns-6d4b75cb6d-4zn7p"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897285    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qj7s\" (UniqueName: \"kubernetes.io/projected/5c519070-80ae-4ef5-b8f4-5927ba8fc676-kube-api-access-4qj7s\") pod \"metrics-server-5c6f97fb75-5w696\" (UID: \"5c519070-80ae-4ef5-b8f4-5927ba8fc676\") " pod="kube-system/metrics-server-5c6f97fb75-5w696"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897338    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92133973-6c32-49a0-910f-93bfa25bcdd1-lib-modules\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897385    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspd6\" (UniqueName: \"kubernetes.io/projected/074796ba-5e47-424d-abea-7c53d2a8083d-kube-api-access-kspd6\") pod \"dashboard-metrics-scraper-dffd48c4c-hpdct\" (UID: \"074796ba-5e47-424d-abea-7c53d2a8083d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-hpdct"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897403    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/772f2347-bdb3-4e73-ad0c-92bdc89a2ef2-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-mp7g8\" (UID: \"772f2347-bdb3-4e73-ad0c-92bdc89a2ef2\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-mp7g8"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897416    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca-tmp\") pod \"storage-provisioner\" (UID: \"d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca\") " pod="kube-system/storage-provisioner"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897431    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c519070-80ae-4ef5-b8f4-5927ba8fc676-tmp-dir\") pod \"metrics-server-5c6f97fb75-5w696\" (UID: \"5c519070-80ae-4ef5-b8f4-5927ba8fc676\") " pod="kube-system/metrics-server-5c6f97fb75-5w696"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897446    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9f4\" (UniqueName: \"kubernetes.io/projected/92133973-6c32-49a0-910f-93bfa25bcdd1-kube-api-access-8v9f4\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897474    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7-config-volume\") pod \"coredns-6d4b75cb6d-4zn7p\" (UID: \"55f1e344-5f0b-4e26-a0da-3e1d79f6f1d7\") " pod="kube-system/coredns-6d4b75cb6d-4zn7p"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897506    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz58m\" (UniqueName: \"kubernetes.io/projected/d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca-kube-api-access-rz58m\") pod \"storage-provisioner\" (UID: \"d437bb18-cd5f-4ddf-ab39-8b4ab7e477ca\") " pod="kube-system/storage-provisioner"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897534    9870 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92133973-6c32-49a0-910f-93bfa25bcdd1-kube-proxy\") pod \"kube-proxy-qvlv7\" (UID: \"92133973-6c32-49a0-910f-93bfa25bcdd1\") " pod="kube-system/kube-proxy-qvlv7"
	Jul 25 20:32:45 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:45.897550    9870 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:47.075832    9870 request.go:601] Waited for 1.176656075s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.082543    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220725132539-44543\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220725132539-44543"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.280079    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220725132539-44543\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220725132539-44543"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.486257    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220725132539-44543\" already exists" pod="kube-system/etcd-embed-certs-20220725132539-44543"
	Jul 25 20:32:47 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:47.734289    9870 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220725132539-44543\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220725132539-44543"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:48.580582    9870 scope.go:110] "RemoveContainer" containerID="74f9b364f42ab5fb7e8406c542d7d5ab232fdf3b1d2a38c262579ac2087de8a9"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:48.960313    9870 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:48.960369    9870 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:32:48 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:48.960505    9870 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4qj7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-5w696_kube-system(5c519070-80ae-4ef5-b8f4-5927ba8fc676): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 25 20:32:48 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:48.960567    9870 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-5w696" podUID=5c519070-80ae-4ef5-b8f4-5927ba8fc676
	Jul 25 20:32:49 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:49.997652    9870 scope.go:110] "RemoveContainer" containerID="74f9b364f42ab5fb7e8406c542d7d5ab232fdf3b1d2a38c262579ac2087de8a9"
	Jul 25 20:32:49 embed-certs-20220725132539-44543 kubelet[9870]: I0725 20:32:49.997886    9870 scope.go:110] "RemoveContainer" containerID="4f9bcb96b351d6d9a5e040aed1f3ecb102c360aca3b0f5b0468e4ca7e4f516b2"
	Jul 25 20:32:49 embed-certs-20220725132539-44543 kubelet[9870]: E0725 20:32:49.998071    9870 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-hpdct_kubernetes-dashboard(074796ba-5e47-424d-abea-7c53d2a8083d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-hpdct" podUID=074796ba-5e47-424d-abea-7c53d2a8083d
	
	* 
	* ==> kubernetes-dashboard [5d443504c77d] <==
	* 2022/07/25 20:31:57 Using namespace: kubernetes-dashboard
	2022/07/25 20:31:57 Using in-cluster config to connect to apiserver
	2022/07/25 20:31:57 Using secret token for csrf signing
	2022/07/25 20:31:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 20:31:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 20:31:57 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 20:31:57 Generating JWE encryption key
	2022/07/25 20:31:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 20:31:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 20:31:57 Initializing JWE encryption key from synchronized object
	2022/07/25 20:31:57 Creating in-cluster Sidecar client
	2022/07/25 20:31:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:31:57 Serving insecurely on HTTP port: 9090
	2022/07/25 20:32:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:31:57 Starting overwatch
	
	* 
	* ==> storage-provisioner [57ad4b4dda5b] <==
	* I0725 20:31:46.060841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:31:46.069555       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:31:46.069604       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:31:46.074987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:31:46.075111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725132539-44543_c95feabb-f3f4-4425-9b90-9a6be90a8ff0!
	I0725 20:31:46.075096       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4e781833-7204-4708-960e-a4c71b96d938", APIVersion:"v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220725132539-44543_c95feabb-f3f4-4425-9b90-9a6be90a8ff0 became leader
	I0725 20:31:46.176368       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220725132539-44543_c95feabb-f3f4-4425-9b90-9a6be90a8ff0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-5w696
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 describe pod metrics-server-5c6f97fb75-5w696
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220725132539-44543 describe pod metrics-server-5c6f97fb75-5w696: exit status 1 (315.38885ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-5w696" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220725132539-44543 describe pod metrics-server-5c6f97fb75-5w696: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.84s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (43.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220725133258-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
E0725 13:39:20.675607   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543: exit status 2 (16.112153017s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543: exit status 2 (16.112163249s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220725133258-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220725133258-44543
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220725133258-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8",
	        "Created": "2022-07-25T20:33:05.177797086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:34:05.868939191Z",
	            "FinishedAt": "2022-07-25T20:34:03.923396798Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/hosts",
	        "LogPath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8-json.log",
	        "Name": "/default-k8s-different-port-20220725133258-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220725133258-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220725133258-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220725133258-44543",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220725133258-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220725133258-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220725133258-44543",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220725133258-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d959e3ed1a40b9c782cebf8e06ab414586b8721209217d047f2def227f431987",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60204"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60205"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d959e3ed1a40",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220725133258-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f636f8256442",
	                        "default-k8s-different-port-20220725133258-44543"
	                    ],
	                    "NetworkID": "9853642baa173155947bfa50253682e7a06d75a9d7b826ac454cf6940209077a",
	                    "EndpointID": "3010782d5e19a81c26507062b1803ef0d3c556cdf2a5d7d721c07bb03456f282",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
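
The inspect dump above is where the harness gets every host-side endpoint: each container port under NetworkSettings.Ports maps to an ephemeral HostPort (22/tcp -> 60201, 8444/tcp -> 60205). A minimal Go sketch of the same lookup, shelling out to docker with the template the cli_runner lines use; the container name is taken from this report, everything else is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port Docker published for a container port,
    // using the same Go template the cli_runner log lines execute.
    func hostPort(container, containerPort string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("default-k8s-different-port-20220725133258-44543", "8444/tcp")
        if err != nil {
            panic(err)
        }
        fmt.Println(p) // "60205" per the inspect output above
    }
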
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220725133258-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220725133258-44543 logs -n 25: (2.893623851s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220725131610-44543            | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220725133257-44543      | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | disable-driver-mounts-20220725133257-44543        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:34:04
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:34:04.627172   61786 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:34:04.627387   61786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:34:04.627392   61786 out.go:309] Setting ErrFile to fd 2...
	I0725 13:34:04.627399   61786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:34:04.627522   61786 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:34:04.628000   61786 out.go:303] Setting JSON to false
	I0725 13:34:04.642819   61786 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":16416,"bootTime":1658764828,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:34:04.642925   61786 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:34:04.664640   61786 out.go:177] * [default-k8s-different-port-20220725133258-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:34:04.706811   61786 notify.go:193] Checking for updates...
	I0725 13:34:04.728632   61786 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:34:04.750439   61786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:34:04.771702   61786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:34:04.793905   61786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:34:04.815725   61786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:34:04.838399   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:34:04.839023   61786 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:34:04.910460   61786 docker.go:137] docker version: linux-20.10.17
	I0725 13:34:04.910592   61786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:34:05.043298   61786 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:34:04.96917702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:34:05.086988   61786 out.go:177] * Using the docker driver based on existing profile
	I0725 13:34:05.107975   61786 start.go:284] selected driver: docker
	I0725 13:34:05.108005   61786 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:05.108159   61786 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:34:05.111649   61786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:34:05.244365   61786 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:34:05.170585413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:34:05.244504   61786 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:34:05.244520   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:05.244529   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:05.244542   61786 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:05.286913   61786 out.go:177] * Starting control plane node default-k8s-different-port-20220725133258-44543 in cluster default-k8s-different-port-20220725133258-44543
	I0725 13:34:05.308135   61786 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:34:05.329129   61786 out.go:177] * Pulling base image ...
	I0725 13:34:05.350052   61786 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:34:05.350055   61786 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:34:05.350147   61786 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:34:05.350159   61786 cache.go:57] Caching tarball of preloaded images
	I0725 13:34:05.350324   61786 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:34:05.350349   61786 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
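
The preload check above is nothing more than a stat of a cached tarball: when the file is present, the download is skipped. A hedged sketch of that existence probe; the filename pattern is copied from the log, but the helper name and the "v18" preload-schema segment of the name are taken as given rather than derived:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadExists reports whether the preloaded-images tarball for a given
    // Kubernetes version is already in the local minikube cache.
    func preloadExists(minikubeHome, k8sVersion string) bool {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
        _, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
        return err == nil
    }

    func main() {
        fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.24.2"))
    }
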
	I0725 13:34:05.351198   61786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/config.json ...
	I0725 13:34:05.414334   61786 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:34:05.414359   61786 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:34:05.414371   61786 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:34:05.414420   61786 start.go:370] acquiring machines lock for default-k8s-different-port-20220725133258-44543: {Name:mk82259bc75cbca30138642157acc7c9a727ddb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:34:05.414495   61786 start.go:374] acquired machines lock for "default-k8s-different-port-20220725133258-44543" in 57.072µs
	I0725 13:34:05.414516   61786 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:34:05.414526   61786 fix.go:55] fixHost starting: 
	I0725 13:34:05.414780   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:34:05.481920   61786 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220725133258-44543: state=Stopped err=<nil>
	W0725 13:34:05.481949   61786 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:34:05.504106   61786 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220725133258-44543" ...
	I0725 13:34:05.525512   61786 cli_runner.go:164] Run: docker start default-k8s-different-port-20220725133258-44543
	I0725 13:34:05.876454   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:34:05.950069   61786 kic.go:415] container "default-k8s-different-port-20220725133258-44543" state is running.
	I0725 13:34:05.950674   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.030858   61786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/config.json ...
	I0725 13:34:06.031375   61786 machine.go:88] provisioning docker machine ...
	I0725 13:34:06.031401   61786 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220725133258-44543"
	I0725 13:34:06.031482   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.112519   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.112732   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.112746   61786 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220725133258-44543 && echo "default-k8s-different-port-20220725133258-44543" | sudo tee /etc/hostname
	I0725 13:34:06.239955   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220725133258-44543
	
	I0725 13:34:06.240048   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.314814   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.314971   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.314987   61786 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220725133258-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220725133258-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220725133258-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:34:06.435146   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
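
Both hostname commands above run through libmachine's native SSH client against the port Docker forwarded for 22/tcp (127.0.0.1:60201, user "docker", key under .minikube/machines, per the sshutil lines). A rough equivalent with golang.org/x/crypto/ssh; this is a sketch, not minikube's implementation, and the key path is shortened for readability:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa"))
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node, never for production
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:60201", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("%s err=%v\n", out, err)
    }
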
	I0725 13:34:06.435164   61786 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:34:06.435185   61786 ubuntu.go:177] setting up certificates
	I0725 13:34:06.435210   61786 provision.go:83] configureAuth start
	I0725 13:34:06.435282   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.510139   61786 provision.go:138] copyHostCerts
	I0725 13:34:06.510295   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:34:06.510304   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:34:06.510390   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:34:06.510624   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:34:06.510637   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:34:06.510694   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:34:06.510842   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:34:06.510848   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:34:06.510906   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:34:06.511027   61786 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220725133258-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220725133258-44543]
	I0725 13:34:06.640290   61786 provision.go:172] copyRemoteCerts
	I0725 13:34:06.640354   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:34:06.640397   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.714183   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:06.800565   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0725 13:34:06.817495   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:34:06.835492   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:34:06.851531   61786 provision.go:86] duration metric: configureAuth took 416.13686ms
	I0725 13:34:06.851544   61786 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:34:06.851704   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:34:06.851763   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.922644   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.922819   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.922832   61786 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:34:07.045838   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:34:07.045853   61786 ubuntu.go:71] root file system type: overlay
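
The fstype probe above decides what the provisioner writes next: `df --output=fstype / | tail -n 1` prints a "Type" header row, and tail strips it, leaving just the filesystem type. The same probe from Go, under the assumption that a POSIX shell and GNU df are available on the target (they are inside the kic container):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // df prints a header line first; tail -n 1 keeps only the value.
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(out))) // "overlay" per the log above
    }
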
	I0725 13:34:07.046003   61786 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:34:07.046082   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.116918   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:07.117160   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:07.117211   61786 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:34:07.249188   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:34:07.249277   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.319965   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:07.320101   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:07.320113   61786 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:34:07.446161   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
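
The one-liner above is the idempotency trick in this provisioning pass: the rendered unit is written to docker.service.new, and only if `diff` reports a difference is it moved into place and the daemon reloaded, re-enabled, and restarted, so an unchanged config never bounces Docker. A small Go sketch of the same write-compare-swap pattern; the paths and systemctl commands come from the log, the helper name and error handling are illustrative, and root privileges are assumed:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // installIfChanged swaps newContents into path and restarts docker,
    // but only when the contents actually differ from what is installed.
    func installIfChanged(path string, newContents []byte) error {
        current, _ := os.ReadFile(path) // a missing file simply counts as "differs"
        if bytes.Equal(current, newContents) {
            return nil // unchanged: leave the running service alone
        }
        if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        unit, _ := os.ReadFile("/tmp/docker.service.new") // hypothetical source of the rendered unit
        _ = installIfChanged("/lib/systemd/system/docker.service", unit)
    }
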
	I0725 13:34:07.446178   61786 machine.go:91] provisioned docker machine in 1.414225697s
	I0725 13:34:07.446188   61786 start.go:307] post-start starting for "default-k8s-different-port-20220725133258-44543" (driver="docker")
	I0725 13:34:07.446194   61786 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:34:07.446265   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:34:07.446311   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.517500   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.603110   61786 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:34:07.606519   61786 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:34:07.606534   61786 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:34:07.606542   61786 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:34:07.606551   61786 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:34:07.606561   61786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:34:07.606663   61786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:34:07.606798   61786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:34:07.606947   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:34:07.613740   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:34:07.629891   61786 start.go:310] post-start completed in 183.624484ms
	I0725 13:34:07.629958   61786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:34:07.630015   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.700658   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.785856   61786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:34:07.790469   61786 fix.go:57] fixHost completed within 2.37498005s
	I0725 13:34:07.790481   61786 start.go:82] releasing machines lock for "default-k8s-different-port-20220725133258-44543", held for 2.375014977s
	I0725 13:34:07.790547   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.861116   61786 ssh_runner.go:195] Run: systemctl --version
	I0725 13:34:07.861126   61786 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:34:07.861183   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.861199   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.938182   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.940737   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:08.241696   61786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:34:08.251517   61786 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:34:08.251594   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:34:08.264323   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:34:08.277213   61786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:34:08.340273   61786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:34:08.415371   61786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:34:08.483465   61786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:34:08.713080   61786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:34:08.784666   61786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:34:08.854074   61786 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:34:08.863345   61786 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:34:08.863409   61786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:34:08.867141   61786 start.go:471] Will wait 60s for crictl version
	I0725 13:34:08.867182   61786 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:34:08.968151   61786 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:34:08.968217   61786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:34:09.002469   61786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:34:09.063979   61786 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:34:09.064085   61786 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220725133258-44543 dig +short host.docker.internal
	I0725 13:34:09.191610   61786 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:34:09.191718   61786 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:34:09.195961   61786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:34:09.205544   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:09.276048   61786 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:34:09.276131   61786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:34:09.305942   61786 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:34:09.305957   61786 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:34:09.306037   61786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:34:09.334786   61786 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:34:09.334808   61786 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:34:09.334878   61786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:34:09.407682   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:09.407694   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:09.407709   61786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:34:09.407726   61786 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220725133258-44543 NodeName:default-k8s-different-port-20220725133258-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 Cgr
oupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:34:09.407863   61786 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220725133258-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:34:09.407969   61786 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220725133258-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0725 13:34:09.408026   61786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:34:09.415906   61786 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:34:09.415950   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:34:09.422916   61786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0725 13:34:09.435999   61786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:34:09.447804   61786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0725 13:34:09.459767   61786 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:34:09.463350   61786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:34:09.472214   61786 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543 for IP: 192.168.76.2
	I0725 13:34:09.472328   61786 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:34:09.472377   61786 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:34:09.472455   61786 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.key
	I0725 13:34:09.472518   61786 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.key.31bdca25
	I0725 13:34:09.472571   61786 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.key
	I0725 13:34:09.472770   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:34:09.472821   61786 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:34:09.472840   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:34:09.472875   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:34:09.472906   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:34:09.472936   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:34:09.473004   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:34:09.473565   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:34:09.490187   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:34:09.506643   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:34:09.523366   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 13:34:09.539862   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:34:09.556235   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:34:09.572084   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:34:09.588997   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:34:09.605403   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:34:09.622071   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:34:09.639455   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:34:09.666648   61786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:34:09.680404   61786 ssh_runner.go:195] Run: openssl version
	I0725 13:34:09.685377   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:34:09.692933   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.696819   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.696867   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.701960   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:34:09.709308   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:34:09.717057   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.721219   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.721287   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.726658   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:34:09.733604   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:34:09.741720   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.745497   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.745548   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.751361   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
	I0725 13:34:09.758844   61786 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-4454
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:09.758948   61786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:34:09.788556   61786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:34:09.796138   61786 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:34:09.796155   61786 kubeadm.go:626] restartCluster start
	I0725 13:34:09.796211   61786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:34:09.803427   61786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:09.803495   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:09.877185   61786 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220725133258-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:34:09.877366   61786 kubeconfig.go:127] "default-k8s-different-port-20220725133258-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:34:09.877706   61786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:34:09.878802   61786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:34:09.886342   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:09.886396   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:09.894462   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.094989   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.095125   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.104812   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.296311   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.296403   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.306824   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.494856   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.494967   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.505102   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.696865   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.697038   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.707693   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.896785   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.896969   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.907495   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.097072   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.097166   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.107646   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.294983   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.295100   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.304071   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.496628   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.496802   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.507122   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.697167   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.697382   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.708140   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.896909   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.897054   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.907309   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.095351   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.095504   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.107280   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.297402   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.297559   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.307933   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.497420   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.497620   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.509829   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.697477   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.697599   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.708129   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.897571   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.897712   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.908504   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.908514   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.908558   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.916432   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.916446   61786 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0725 13:34:12.916453   61786 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:34:12.916512   61786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:34:12.948692   61786 docker.go:443] Stopping containers: [21fb696c3038 51677fb6144c b476d4c9ea34 61aef9880797 d95939ee82e4 142e60195cf4 1a100f122ea6 381dfaca547b abf2f82e0e50 e65639a75c81 11bcb130ff7a d0a27f48f794 449a4cccfc67 5548f957dbdf 09e5b2b95ce2]
	I0725 13:34:12.948776   61786 ssh_runner.go:195] Run: docker stop 21fb696c3038 51677fb6144c b476d4c9ea34 61aef9880797 d95939ee82e4 142e60195cf4 1a100f122ea6 381dfaca547b abf2f82e0e50 e65639a75c81 11bcb130ff7a d0a27f48f794 449a4cccfc67 5548f957dbdf 09e5b2b95ce2
	I0725 13:34:12.979483   61786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:34:12.989370   61786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:34:12.996679   61786 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 25 20:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 25 20:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 25 20:33 /etc/kubernetes/scheduler.conf
	
	I0725 13:34:12.996731   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 13:34:13.003759   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 13:34:13.011125   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 13:34:13.018455   61786 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:13.018511   61786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:34:13.025510   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 13:34:13.033159   61786 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:13.033202   61786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:34:13.040388   61786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:34:13.048073   61786 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:34:13.048082   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:13.093387   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.153710   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060020888s)
	I0725 13:34:14.153730   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.329681   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.375596   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.426060   61786 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:34:14.426130   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:14.937020   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:15.437072   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:15.498123   61786 api_server.go:71] duration metric: took 1.071801664s to wait for apiserver process to appear ...
	I0725 13:34:15.498156   61786 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:34:15.498176   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:15.499467   61786 api_server.go:256] stopped: https://127.0.0.1:60205/healthz: Get "https://127.0.0.1:60205/healthz": EOF
	I0725 13:34:16.000075   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:19.004558   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:19.004576   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:19.500619   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:19.507069   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:19.507082   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:20.000615   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:20.006253   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:20.006267   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:20.500871   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:20.506841   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 200:
	ok
	I0725 13:34:20.513394   61786 api_server.go:140] control plane version: v1.24.2
	I0725 13:34:20.513410   61786 api_server.go:130] duration metric: took 5.014188979s to wait for apiserver health ...
	I0725 13:34:20.513416   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:20.513426   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:20.513437   61786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:34:20.524394   61786 system_pods.go:59] 8 kube-system pods found
	I0725 13:34:20.524410   61786 system_pods.go:61] "coredns-6d4b75cb6d-ltpwj" [43fe43ee-d181-4a21-936f-c588e810d1b8] Running
	I0725 13:34:20.524414   61786 system_pods.go:61] "etcd-default-k8s-different-port-20220725133258-44543" [e409d4c7-e1f8-4825-b013-df9d0e6680d1] Running
	I0725 13:34:20.524422   61786 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [e373ecf2-4fb2-436f-b520-e05c162005e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 13:34:20.524429   61786 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [2203416f-f18e-4c6c-bf8f-62fe42f5d716] Running
	I0725 13:34:20.524433   61786 system_pods.go:61] "kube-proxy-bsbv8" [00380a03-69be-4582-bc91-be2e992a8756] Running
	I0725 13:34:20.524439   61786 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [ee2345ff-7e0e-4e32-a303-ec8637f9a6e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:34:20.524446   61786 system_pods.go:61] "metrics-server-5c6f97fb75-dt6cw" [5f26aec3-73de-457a-ab6e-6b8db807386c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:34:20.524451   61786 system_pods.go:61] "storage-provisioner" [872443cb-9c58-4914-bfd8-9c919c4c2729] Running
	I0725 13:34:20.524454   61786 system_pods.go:74] duration metric: took 11.01124ms to wait for pod list to return data ...
	I0725 13:34:20.524461   61786 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:34:20.528607   61786 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:34:20.528628   61786 node_conditions.go:123] node cpu capacity is 6
	I0725 13:34:20.528639   61786 node_conditions.go:105] duration metric: took 4.173368ms to run NodePressure ...
	I0725 13:34:20.528651   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:20.692164   61786 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 13:34:20.699554   61786 kubeadm.go:777] kubelet initialised
	I0725 13:34:20.699567   61786 kubeadm.go:778] duration metric: took 7.38462ms waiting for restarted kubelet to initialise ...
	I0725 13:34:20.699575   61786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:34:20.706455   61786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.713491   61786 pod_ready.go:92] pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:20.713502   61786 pod_ready.go:81] duration metric: took 7.031927ms waiting for pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.713509   61786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.720234   61786 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:20.720245   61786 pod_ready.go:81] duration metric: took 6.729713ms waiting for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.720266   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:22.736135   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:24.737145   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:26.739266   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:28.739639   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:30.237640   61786 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.237653   61786 pod_ready.go:81] duration metric: took 9.516001331s waiting for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.237660   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.747406   61786 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.747419   61786 pod_ready.go:81] duration metric: took 509.699097ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.747427   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bsbv8" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.752023   61786 pod_ready.go:92] pod "kube-proxy-bsbv8" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.752032   61786 pod_ready.go:81] duration metric: took 4.600741ms waiting for pod "kube-proxy-bsbv8" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.752038   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.756171   61786 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.756179   61786 pod_ready.go:81] duration metric: took 4.135517ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.756185   61786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:32.769583   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:35.266268   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:37.269544   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:39.767177   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:42.270129   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:44.770171   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:47.266382   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:49.270908   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:51.766788   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:53.770199   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:56.267530   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:58.270364   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:00.271370   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:02.770491   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:05.267811   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:07.268019   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:09.269175   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:11.771924   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:14.268303   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:16.269490   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:18.269647   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:20.269732   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:22.770167   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:25.272345   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:27.768425   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:29.772730   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:32.269716   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:34.272269   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:36.769112   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:38.770762   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:41.269690   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:43.270881   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:45.770163   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:47.770257   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:49.772467   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:52.271997   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:54.770220   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:56.770972   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:58.772954   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:01.271748   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:03.769893   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:05.771682   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:08.272756   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:10.772210   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:13.269694   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:15.271259   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:17.271758   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:19.771816   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:22.271153   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:24.273500   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:26.771501   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:28.772146   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:30.773207   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:33.272043   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:35.773055   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:38.271505   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:40.771959   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:43.271115   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:45.272040   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:47.272525   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:49.771562   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:52.272147   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:54.273010   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:56.274371   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:58.774235   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:01.274355   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:03.773714   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:05.773848   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:07.774416   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:10.272739   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:12.273147   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:14.774766   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:17.272082   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:19.273723   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:21.275454   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:23.774025   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:25.774734   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:28.275012   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:30.275495   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:32.775435   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:35.273955   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:37.773340   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:40.273624   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:42.275705   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:44.772990   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:46.776013   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:49.275522   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:51.776201   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:54.272799   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:56.276129   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:58.776449   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:01.276937   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:03.775275   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:05.776601   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:08.275066   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:10.773865   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:12.777289   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:15.276931   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:17.277015   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:19.777784   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:22.274664   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:24.277500   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:26.777501   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:28.777629   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:30.768525   61786 pod_ready.go:81] duration metric: took 4m0.003905943s waiting for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" ...
	E0725 13:38:30.768539   61786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:38:30.768550   61786 pod_ready.go:38] duration metric: took 4m10.059123063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:38:30.768619   61786 kubeadm.go:630] restartCluster took 4m20.959894497s
	W0725 13:38:30.768693   61786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
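The four-minute loop above repeatedly checks the pod's PodReady condition until the wait times out. A minimal sketch of such a readiness check with client-go, assuming a kubeconfig at the default path — illustrative only, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// which is what each "has status \"Ready\":\"False\"" line above polls.
func podReady(c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ready, err := podReady(client, "kube-system", "metrics-server-5c6f97fb75-dt6cw")
	fmt.Println(ready, err)
}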
	I0725 13:38:30.768708   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:38:33.097038   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.328248172s)
	I0725 13:38:33.097098   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:38:33.106317   61786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:38:33.113479   61786 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:38:33.113523   61786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:38:33.120573   61786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
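The failed `ls` above is the stale-config probe: if any of the four kubeadm-written kubeconfigs is missing, cleanup is skipped and a fresh `kubeadm init` runs instead. A rough local sketch of the same check, as an assumption about its logic (minikube actually runs `ls -la` over ssh):

package main

import (
	"fmt"
	"os"
)

// staleConfigPresent reports whether all four kubeconfig files that
// kubeadm writes exist; the log above skips cleanup when any is absent.
func staleConfigPresent() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			return false // e.g. "No such file or directory", as in the log
		}
	}
	return true
}

func main() {
	fmt.Println("stale config present:", staleConfigPresent())
}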
	I0725 13:38:33.120592   61786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:38:33.404685   61786 out.go:204]   - Generating certificates and keys ...
	I0725 13:38:34.559189   61786 out.go:204]   - Booting up control plane ...
	I0725 13:38:41.609910   61786 out.go:204]   - Configuring RBAC rules ...
	I0725 13:38:41.984727   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:38:41.984743   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:38:41.984776   61786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:38:41.984845   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:41.984852   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=default-k8s-different-port-20220725133258-44543 minikube.k8s.io/updated_at=2022_07_25T13_38_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:41.995147   61786 ops.go:34] apiserver oom_adj: -16
	I0725 13:38:42.131241   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:42.687779   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:43.189737   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:43.689797   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:44.188581   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:44.688143   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:45.189843   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:45.688099   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:46.189813   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:46.689855   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:47.189263   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:47.689377   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:48.188105   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:48.688418   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:49.187882   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:49.689992   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:50.188492   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:50.689222   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:51.190147   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:51.688570   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:52.188695   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:52.688302   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:53.189368   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:53.688182   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.188476   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.688006   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.741570   61786 kubeadm.go:1045] duration metric: took 12.756406696s to wait for elevateKubeSystemPrivileges.
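The repeated `kubectl get sa default` runs above are a poll: the cluster's "default" ServiceAccount must exist before the cluster-admin binding can take effect. A minimal client-go sketch of the same wait, assuming a local kubeconfig and an illustrative 2-minute budget:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls every 500ms until the "default" ServiceAccount
// appears, mirroring the repeated "get sa default" calls in the log.
func waitForDefaultSA(c kubernetes.Interface) error {
	return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForDefaultSA(kubernetes.NewForConfigOrDie(config)); err != nil {
		panic(err)
	}
}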
	I0725 13:38:54.741586   61786 kubeadm.go:397] StartCluster complete in 4m44.96945209s
	I0725 13:38:54.741601   61786 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:38:54.741678   61786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:38:54.742213   61786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:38:55.258629   61786 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220725133258-44543" rescaled to 1
	I0725 13:38:55.258668   61786 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:38:55.258680   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:38:55.258695   61786 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:38:55.282860   61786 out.go:177] * Verifying Kubernetes components...
	I0725 13:38:55.258826   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:38:55.282931   61786 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282932   61786 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282939   61786 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282940   61786 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.314162   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:38:55.355956   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:38:55.355963   61786 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355964   61786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355964   61786 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355960   61786 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220725133258-44543"
	W0725 13:38:55.355985   61786 addons.go:162] addon dashboard should already be in state true
	W0725 13:38:55.355980   61786 addons.go:162] addon storage-provisioner should already be in state true
	W0725 13:38:55.355972   61786 addons.go:162] addon metrics-server should already be in state true
	I0725 13:38:55.356030   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356035   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356040   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356345   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.356457   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.356516   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.357205   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.377760   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.503130   61786 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.522805   61786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0725 13:38:55.522844   61786 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:38:55.601767   61786 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:38:55.544055   61786 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:38:55.544114   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.580996   61786 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:38:55.598483   61786 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220725133258-44543" to be "Ready" ...
	I0725 13:38:55.622916   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:38:55.622930   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:38:55.622939   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:38:55.623010   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.623338   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.626513   61786 node_ready.go:49] node "default-k8s-different-port-20220725133258-44543" has status "Ready":"True"
	I0725 13:38:55.680887   61786 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:38:55.659957   61786 node_ready.go:38] duration metric: took 37.0254ms waiting for node "default-k8s-different-port-20220725133258-44543" to be "Ready" ...
	I0725 13:38:55.660000   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.718012   61786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:38:55.718127   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:38:55.718145   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:38:55.718254   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.729313   61786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace to be "Ready" ...
	I0725 13:38:55.755052   61786 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:38:55.755074   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:38:55.755201   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.759149   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.819926   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.824327   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.852585   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.911420   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:38:55.930541   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:38:55.930555   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:38:55.947415   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:38:55.947430   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:38:56.015152   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:38:56.015187   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:38:56.024307   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:38:56.024322   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:38:56.037145   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:38:56.128060   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:38:56.208962   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:38:56.208980   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:38:56.314257   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:38:56.314275   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:38:56.500909   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:38:56.500925   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:38:56.505830   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149864883s)
	I0725 13:38:56.505848   61786 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
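The sed pipeline that just completed splices a hosts stanza into the coredns Corefile ahead of its forward directive, so host.minikube.internal resolves to the host's IP. An API-based equivalent, purely as an illustrative sketch (minikube does this via the ssh'd kubectl pipeline shown above, not through client-go):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Insert the hosts block just before the forward directive,
	// mirroring the sed expression in the log above.
	hosts := "        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)

	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}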
	I0725 13:38:56.539806   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:38:56.539822   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:38:56.630424   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:38:56.630457   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:38:56.706962   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:38:56.706979   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:38:56.735501   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:38:56.735519   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:38:56.740969   61786 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-krc4w" not found
	I0725 13:38:56.740985   61786 pod_ready.go:81] duration metric: took 1.011621188s waiting for pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace to be "Ready" ...
	E0725 13:38:56.740999   61786 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-krc4w" not found
	I0725 13:38:56.741009   61786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace to be "Ready" ...
	I0725 13:38:56.818086   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:38:56.818101   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:38:56.844768   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:38:56.929491   61786 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:57.767005   61786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:38:57.841616   61786 addons.go:414] enableAddons completed in 2.582848432s
	I0725 13:38:58.756866   61786 pod_ready.go:102] pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace has status "Ready":"False"
	I0725 13:39:00.755719   61786 pod_ready.go:92] pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.755735   61786 pod_ready.go:81] duration metric: took 4.014600528s waiting for pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.755745   61786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.762033   61786 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.762042   61786 pod_ready.go:81] duration metric: took 6.291089ms waiting for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.762049   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.767591   61786 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.767601   61786 pod_ready.go:81] duration metric: took 5.547326ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.767610   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.777675   61786 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.777686   61786 pod_ready.go:81] duration metric: took 10.069146ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.777694   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdsqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.783826   61786 pod_ready.go:92] pod "kube-proxy-pdsqs" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.783835   61786 pod_ready.go:81] duration metric: took 6.136533ms waiting for pod "kube-proxy-pdsqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.783841   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:01.152734   61786 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:01.152745   61786 pod_ready.go:81] duration metric: took 368.887729ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:01.152751   61786 pod_ready.go:38] duration metric: took 5.434537401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:39:01.152763   61786 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:39:01.152815   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:39:01.178939   61786 api_server.go:71] duration metric: took 5.920074581s to wait for apiserver process to appear ...
	I0725 13:39:01.178955   61786 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:39:01.178962   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:39:01.184599   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 200:
	ok
	I0725 13:39:01.185840   61786 api_server.go:140] control plane version: v1.24.2
	I0725 13:39:01.185848   61786 api_server.go:130] duration metric: took 6.888886ms to wait for apiserver health ...
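The healthz probe at api_server.go:240 is a plain HTTPS GET against the forwarded apiserver port, expecting a 200 with body "ok" as logged above. A minimal sketch using the same local port, skipping TLS verification only for brevity (an assumption; minikube verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://127.0.0.1:60205/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}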
	I0725 13:39:01.185853   61786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:39:01.356451   61786 system_pods.go:59] 8 kube-system pods found
	I0725 13:39:01.356466   61786 system_pods.go:61] "coredns-6d4b75cb6d-whj7v" [ee95aea1-d131-4524-a2e1-04d0c4da8e20] Running
	I0725 13:39:01.356470   61786 system_pods.go:61] "etcd-default-k8s-different-port-20220725133258-44543" [9971c3fd-8bc1-4799-825c-47d542d172cd] Running
	I0725 13:39:01.356474   61786 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [003259d7-d067-4c90-b5bf-34a9c60d430c] Running
	I0725 13:39:01.356477   61786 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [c292e2ae-00d3-48c2-8d9a-e06a2301d358] Running
	I0725 13:39:01.356483   61786 system_pods.go:61] "kube-proxy-pdsqs" [ab647055-f1f8-4144-a7a2-1d7a7da1e1cf] Running
	I0725 13:39:01.356496   61786 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [2c7cb5cc-4c11-4aee-a9c7-9e657d1b3610] Running
	I0725 13:39:01.356502   61786 system_pods.go:61] "metrics-server-5c6f97fb75-6tbqr" [d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:39:01.356511   61786 system_pods.go:61] "storage-provisioner" [a99d3e3f-11b6-4b57-9e40-e684accad53d] Running
	I0725 13:39:01.356515   61786 system_pods.go:74] duration metric: took 170.651524ms to wait for pod list to return data ...
	I0725 13:39:01.356521   61786 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:39:01.553425   61786 default_sa.go:45] found service account: "default"
	I0725 13:39:01.553439   61786 default_sa.go:55] duration metric: took 196.906744ms for default service account to be created ...
	I0725 13:39:01.553446   61786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:39:01.754454   61786 system_pods.go:86] 8 kube-system pods found
	I0725 13:39:01.754469   61786 system_pods.go:89] "coredns-6d4b75cb6d-whj7v" [ee95aea1-d131-4524-a2e1-04d0c4da8e20] Running
	I0725 13:39:01.754473   61786 system_pods.go:89] "etcd-default-k8s-different-port-20220725133258-44543" [9971c3fd-8bc1-4799-825c-47d542d172cd] Running
	I0725 13:39:01.754477   61786 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [003259d7-d067-4c90-b5bf-34a9c60d430c] Running
	I0725 13:39:01.754481   61786 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [c292e2ae-00d3-48c2-8d9a-e06a2301d358] Running
	I0725 13:39:01.754484   61786 system_pods.go:89] "kube-proxy-pdsqs" [ab647055-f1f8-4144-a7a2-1d7a7da1e1cf] Running
	I0725 13:39:01.754488   61786 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [2c7cb5cc-4c11-4aee-a9c7-9e657d1b3610] Running
	I0725 13:39:01.754496   61786 system_pods.go:89] "metrics-server-5c6f97fb75-6tbqr" [d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:39:01.754500   61786 system_pods.go:89] "storage-provisioner" [a99d3e3f-11b6-4b57-9e40-e684accad53d] Running
	I0725 13:39:01.754505   61786 system_pods.go:126] duration metric: took 201.049861ms to wait for k8s-apps to be running ...
	I0725 13:39:01.754512   61786 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 13:39:01.754564   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:39:01.765642   61786 system_svc.go:56] duration metric: took 11.126618ms WaitForService to wait for kubelet.
	I0725 13:39:01.765659   61786 kubeadm.go:572] duration metric: took 6.506780003s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:39:01.765680   61786 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:39:01.952036   61786 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:39:01.952050   61786 node_conditions.go:123] node cpu capacity is 6
	I0725 13:39:01.952056   61786 node_conditions.go:105] duration metric: took 186.353687ms to run NodePressure ...
	I0725 13:39:01.952064   61786 start.go:216] waiting for startup goroutines ...
	I0725 13:39:01.984984   61786 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:39:02.007662   61786 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220725133258-44543" cluster and "default" namespace by default
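The version line at start.go:506 reports the minor-version skew between the local kubectl (1.24.1) and the cluster (1.24.2). A sketch of that skew computation, with a hypothetical minorSkew helper that is not minikube's code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" version strings, as in "(minor skew: 0)".
// Assumes well-formed input; a real implementation would validate.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.24.1", "1.24.2")) // 0
}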
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:34:06 UTC, end at Mon 2022-07-25 20:39:54 UTC. --
	Jul 25 20:38:31 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:31.780546269Z" level=info msg="ignoring event" container=9f17b99348d5596db9ef9d83cceeddd9d57ecfe2b1d64c01af3d124a1cd12097 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:31 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:31.851226868Z" level=info msg="ignoring event" container=8cb4b870a649edb892f171c0844868e810cf8e34aaa667824c5b0c22a16007c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:31 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:31.920510787Z" level=info msg="ignoring event" container=af6ce34ba71f8366ec5994bd40ce2f0bb04ed60ad8da8694c748579a5cc36b24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:31 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:31.987865302Z" level=info msg="ignoring event" container=d95eb60deebbaf8a7dfc963fe05a0cd0b031255dfc2dfa0130d3e45e5c935851 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.111076809Z" level=info msg="ignoring event" container=dd7a0ff0a2e48b9846aa099283eeac08074b71a4e94c8f84d3620802af731e44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.178903654Z" level=info msg="ignoring event" container=845949fda78bed43cb8c5994005f590bca7ac0830c92d42a03b27ab0fef66f27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.243971301Z" level=info msg="ignoring event" container=e90bb6e19089f3c4258acfd3a71f90956f641d56a20d2601ab9428f766327432 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.310152312Z" level=info msg="ignoring event" container=c8e1d3b2d1d6656576e473940dae274e7234ab61c29132ced7c0923b596e690c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.390719334Z" level=info msg="ignoring event" container=393b69df1d6b40d6275bca11660c3304940bbe503a0ddd7d680fb10c1899f6f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.458214100Z" level=info msg="ignoring event" container=c3157f62a5c25544457b46deb9ab6075345b1560a91925f4324144581034a0f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.574054948Z" level=info msg="ignoring event" container=179f3e9d56294f3398f8295ba86abb6973eff67c719c3e23af9a1b952e23ca79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.637501332Z" level=info msg="ignoring event" container=313d52e40630bbe76b1d83bee9325b607ed310383148576b61095113107e2c49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.723857430Z" level=info msg="ignoring event" container=42e0c857bff2751624cf59c62181e9a07194d6a1dc7084b6eceb87e27aa6dfe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:55 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:55.220746966Z" level=info msg="ignoring event" container=09e496a283e3161035a3e331f928d8cfcbc2c9d828c344bd38ada8728c02a5a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:58 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:58.070318985Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:38:58 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:58.070344443Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:38:58 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:58.071489449Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:38:59 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:59.265533438Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 20:39:04 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:04.835967380Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:39:05 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:05.137907004Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:39:08 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:08.519826084Z" level=info msg="ignoring event" container=28616caa6bd441c623c10d57da94a030b55c88dd7b42c14ca61630f4cf26e4eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:39:09 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:09.271561451Z" level=info msg="ignoring event" container=73a466ed4aae161455e2519979dbf041a8fe4764d65b6e5888b5b9d954c98d64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:39:13 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:13.058648295Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:13 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:13.058740049Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:13 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:13.059903138Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	73a466ed4aae1       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   b0e71245e542b
	1ed4c1abade58       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   50 seconds ago       Running             kubernetes-dashboard        0                   41f7b8202d563
	a83ee8e474c19       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   82596d03739af
	08fc61a891721       a4ca41631cc7a                                                                                    58 seconds ago       Running             coredns                     0                   b2344f0b5438f
	8115f30a19008       a634548d10b03                                                                                    59 seconds ago       Running             kube-proxy                  0                   f5ecaa90b9ca4
	bff25f1c21fc0       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   665915a11c73c
	f26823dfc5081       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   7a7dc57d5af33
	6eb33669f1b14       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   dc64aac1af48a
	b785402e83334       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   49d920006ee98
	
	* 
	* ==> coredns [08fc61a89172] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220725133258-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220725133258-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=default-k8s-different-port-20220725133258-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_38_41_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:38:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220725133258-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:39:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220725133258-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                4c41b13b-a64f-4800-b37e-f3f5767d3eeb
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-whj7v                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-default-k8s-different-port-20220725133258-44543                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220725133258-44543              250m (4%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220725133258-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-pdsqs                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220725133258-44543              100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 metrics-server-5c6f97fb75-6tbqr                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-nz9jv                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-7tp4v                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 58s   kube-proxy       
	  Normal  Starting                 73s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                62s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeReady
	  Normal  RegisteredNode           61s   node-controller  Node default-k8s-different-port-20220725133258-44543 event: Registered Node default-k8s-different-port-20220725133258-44543 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [f26823dfc508] <==
	* {"level":"info","ts":"2022-07-25T20:38:36.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:38:36.042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:38:36.437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220725133258-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:38:36.440Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:38:36.440Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:38:36.440Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:38:36.458Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:39:54 up  1:21,  0 users,  load average: 0.65, 0.87, 1.05
	Linux default-k8s-different-port-20220725133258-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [bff25f1c21fc] <==
	* I0725 20:38:39.741187       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 20:38:40.008707       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 20:38:40.034649       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 20:38:40.144388       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0725 20:38:40.147823       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0725 20:38:40.148461       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 20:38:40.151051       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 20:38:40.934763       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:38:41.798602       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:38:41.803847       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 20:38:41.813461       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:38:41.898584       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:38:54.639342       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:38:54.690800       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 20:38:56.203335       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:38:56.919007       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.96.104.185]
	I0725 20:38:57.723139       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.150.230]
	I0725 20:38:57.733509       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.118.123]
	W0725 20:38:57.738239       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:38:57.738672       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:38:57.738749       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:38:57.738687       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:38:57.738998       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:38:57.740807       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [6eb33669f1b1] <==
	* I0725 20:38:54.792048       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-krc4w"
	I0725 20:38:54.799830       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-whj7v"
	I0725 20:38:54.817070       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-krc4w"
	I0725 20:38:56.739740       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:38:56.742598       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:38:56.747741       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:38:56.806125       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-6tbqr"
	I0725 20:38:57.628213       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:38:57.633566       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:38:57.636299       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0725 20:38:57.638508       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.641539       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.643379       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.643493       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.647538       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 20:38:57.650992       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.651053       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.654821       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.654996       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.658762       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.658815       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:38:57.668216       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-7tp4v"
	I0725 20:38:57.704905       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-nz9jv"
	E0725 20:39:51.582855       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 20:39:51.590715       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [8115f30a1900] <==
	* I0725 20:38:55.953417       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:38:55.953700       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:38:55.953907       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:38:56.128544       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:38:56.129123       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:38:56.129204       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:38:56.129217       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:38:56.129407       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:38:56.139319       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:38:56.139532       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:38:56.139559       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:38:56.200739       1 config.go:444] "Starting node config controller"
	I0725 20:38:56.200768       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:38:56.201365       1 config.go:317] "Starting service config controller"
	I0725 20:38:56.201372       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:38:56.201385       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:38:56.201387       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:38:56.301456       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:38:56.301616       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:38:56.301651       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [b785402e8333] <==
	* W0725 20:38:38.828092       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:38:38.828100       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:38:38.828211       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:38:38.828223       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 20:38:38.828494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:38:38.828531       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:38:38.828610       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:38.828610       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:38.828621       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:38.828625       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:38.829033       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:38:38.829066       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:38:38.829035       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:38.829080       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:39.654638       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 20:38:39.654660       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 20:38:39.683556       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:39.683606       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:39.737993       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 20:38:39.738046       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:38:39.894003       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:38:39.894076       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:38:39.899098       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:38:39.899134       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0725 20:38:41.560715       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:34:06 UTC, end at Mon 2022-07-25 20:39:55 UTC. --
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.920285    9591 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.920355    9591 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.920431    9591 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.920479    9591 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986577    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpgs8\" (UniqueName: \"kubernetes.io/projected/03cb3fda-d35f-4d4f-824b-390bfded730d-kube-api-access-tpgs8\") pod \"kubernetes-dashboard-5fd5574d9f-7tp4v\" (UID: \"03cb3fda-d35f-4d4f-824b-390bfded730d\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7tp4v"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986635    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crfb5\" (UniqueName: \"kubernetes.io/projected/fc5b8eee-f995-4dd2-9453-12fa2acc79d8-kube-api-access-crfb5\") pod \"dashboard-metrics-scraper-dffd48c4c-nz9jv\" (UID: \"fc5b8eee-f995-4dd2-9453-12fa2acc79d8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-nz9jv"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986654    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee95aea1-d131-4524-a2e1-04d0c4da8e20-config-volume\") pod \"coredns-6d4b75cb6d-whj7v\" (UID: \"ee95aea1-d131-4524-a2e1-04d0c4da8e20\") " pod="kube-system/coredns-6d4b75cb6d-whj7v"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986671    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a99d3e3f-11b6-4b57-9e40-e684accad53d-tmp\") pod \"storage-provisioner\" (UID: \"a99d3e3f-11b6-4b57-9e40-e684accad53d\") " pod="kube-system/storage-provisioner"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986687    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc5b8eee-f995-4dd2-9453-12fa2acc79d8-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-nz9jv\" (UID: \"fc5b8eee-f995-4dd2-9453-12fa2acc79d8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-nz9jv"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986734    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/03cb3fda-d35f-4d4f-824b-390bfded730d-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-7tp4v\" (UID: \"03cb3fda-d35f-4d4f-824b-390bfded730d\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7tp4v"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986925    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9-tmp-dir\") pod \"metrics-server-5c6f97fb75-6tbqr\" (UID: \"d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9\") " pod="kube-system/metrics-server-5c6f97fb75-6tbqr"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986950    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krc5g\" (UniqueName: \"kubernetes.io/projected/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-kube-api-access-krc5g\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986998    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmjm\" (UniqueName: \"kubernetes.io/projected/d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9-kube-api-access-fnmjm\") pod \"metrics-server-5c6f97fb75-6tbqr\" (UID: \"d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9\") " pod="kube-system/metrics-server-5c6f97fb75-6tbqr"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987081    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-kube-proxy\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987120    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-lib-modules\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987157    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-xtables-lock\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987177    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb4d\" (UniqueName: \"kubernetes.io/projected/ee95aea1-d131-4524-a2e1-04d0c4da8e20-kube-api-access-lfb4d\") pod \"coredns-6d4b75cb6d-whj7v\" (UID: \"ee95aea1-d131-4524-a2e1-04d0c4da8e20\") " pod="kube-system/coredns-6d4b75cb6d-whj7v"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987192    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5559b\" (UniqueName: \"kubernetes.io/projected/a99d3e3f-11b6-4b57-9e40-e684accad53d-kube-api-access-5559b\") pod \"storage-provisioner\" (UID: \"a99d3e3f-11b6-4b57-9e40-e684accad53d\") " pod="kube-system/storage-provisioner"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987230    9591 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:54.117381    9591 request.go:601] Waited for 1.120408457s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.168853    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.361984    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.521103    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.776661    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:55.321076    9591 scope.go:110] "RemoveContainer" containerID="73a466ed4aae161455e2519979dbf041a8fe4764d65b6e5888b5b9d954c98d64"
	
	* 
	* ==> kubernetes-dashboard [1ed4c1abade5] <==
	* 2022/07/25 20:39:04 Using namespace: kubernetes-dashboard
	2022/07/25 20:39:04 Using in-cluster config to connect to apiserver
	2022/07/25 20:39:04 Using secret token for csrf signing
	2022/07/25 20:39:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 20:39:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 20:39:04 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 20:39:04 Generating JWE encryption key
	2022/07/25 20:39:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 20:39:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 20:39:04 Initializing JWE encryption key from synchronized object
	2022/07/25 20:39:04 Creating in-cluster Sidecar client
	2022/07/25 20:39:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:39:04 Serving insecurely on HTTP port: 9090
	2022/07/25 20:39:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:39:04 Starting overwatch
	
	* 
	* ==> storage-provisioner [a83ee8e474c1] <==
	* I0725 20:38:57.431178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:38:57.448392       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:38:57.448467       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:38:57.504818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:38:57.505016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725133258-44543_8c774d3f-2c31-4654-9f23-95742b78792a!
	I0725 20:38:57.505686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6fbd39bb-48a1-4285-acaa-c7fbedcd339f", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220725133258-44543_8c774d3f-2c31-4654-9f23-95742b78792a became leader
	I0725 20:38:57.606298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725133258-44543_8c774d3f-2c31-4654-9f23-95742b78792a!
	

                                                
                                                
-- /stdout --
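
Note: the logs above share one thread. The apiserver cannot fetch the OpenAPI spec for v1beta1.metrics.k8s.io (HTTP 503), the controller-manager cannot discover the metrics.k8s.io/v1beta1 group, and the dashboard's metric client health check fails against dashboard-metrics-scraper. All three trace back to the metrics-server pod not yet serving. A minimal client-go sketch (not part of the minikube harness; the kubeconfig path is a placeholder) that probes the same aggregated API group:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the path here is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// While the APIService backed by metrics-server answers 503 (as in the
	// apiserver log above), this discovery call fails; once the pod is
	// serving, it returns the group's resource list instead.
	if _, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
		fmt.Println("metrics.k8s.io/v1beta1 not ready:", err)
		return
	}
	fmt.Println("metrics.k8s.io/v1beta1 available")
}
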
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-6tbqr
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 describe pod metrics-server-5c6f97fb75-6tbqr
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220725133258-44543 describe pod metrics-server-5c6f97fb75-6tbqr: exit status 1 (272.996034ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-6tbqr" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220725133258-44543 describe pod metrics-server-5c6f97fb75-6tbqr: exit status 1
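
The exit status 1 above is a benign race: metrics-server-5c6f97fb75-6tbqr matched the status.phase!=Running field selector when pods were listed, but the pod was already gone by the time the follow-up describe ran, so the apiserver returned NotFound. A minimal client-go sketch of the same list-then-get pattern (kubeconfig path is again a placeholder) that tolerates the race instead of failing:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// Same selector the harness uses: every pod not in phase Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Re-fetch each pod; NotFound here is the same race that produced
		// the non-zero exit from kubectl describe above.
		if _, err := cs.CoreV1().Pods(p.Namespace).Get(ctx, p.Name, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			fmt.Printf("pod %s/%s disappeared between list and get\n", p.Namespace, p.Name)
		}
	}
}
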
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220725133258-44543
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220725133258-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8",
	        "Created": "2022-07-25T20:33:05.177797086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:34:05.868939191Z",
	            "FinishedAt": "2022-07-25T20:34:03.923396798Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/hosts",
	        "LogPath": "/var/lib/docker/containers/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8/f636f825644219632d3312e5c418ac3a076af6794540372ef4c57f807ac470f8-json.log",
	        "Name": "/default-k8s-different-port-20220725133258-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220725133258-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220725133258-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db3b3085e7a87bd97085d70f453700d2778711a2e70b85f11a3488a7a47d9597/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220725133258-44543",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220725133258-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220725133258-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220725133258-44543",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220725133258-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d959e3ed1a40b9c782cebf8e06ab414586b8721209217d047f2def227f431987",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60204"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60205"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d959e3ed1a40",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220725133258-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f636f8256442",
	                        "default-k8s-different-port-20220725133258-44543"
	                    ],
	                    "NetworkID": "9853642baa173155947bfa50253682e7a06d75a9d7b826ac454cf6940209077a",
	                    "EndpointID": "3010782d5e19a81c26507062b1803ef0d3c556cdf2a5d7d721c07bb03456f282",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
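The "Ports" map in the inspect output above is what the harness keeps querying: every later SSH step resolves the host side of "22/tcp" (60201 here) with a docker container inspect Go template. A minimal sketch of that lookup in Go, shelling out to the same template the cli_runner lines in this log use (hostPortFor is a hypothetical helper name; it assumes only that the docker CLI is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor resolves the host port Docker mapped to a container port
	// such as "22/tcp", using the same Go-template inspect query that the
	// cli_runner lines in this log use.
	func hostPortFor(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Against the port mappings above, this prints 60201.
		p, err := hostPortFor("default-k8s-different-port-20220725133258-44543", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", p)
	}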
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220725133258-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220725133258-44543 logs -n 25: (2.891904446s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220725131610-44543            | jenkins | v1.26.0 | 25 Jul 22 13:21 PDT |                     |
	|         | old-k8s-version-20220725131610-44543              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:24 PDT | 25 Jul 22 13:24 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220725131741-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:25 PDT |
	|         | no-preload-20220725131741-44543                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:25 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:26 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220725133257-44543      | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | disable-driver-mounts-20220725133257-44543        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:34:04
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:34:04.627172   61786 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:34:04.627387   61786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:34:04.627392   61786 out.go:309] Setting ErrFile to fd 2...
	I0725 13:34:04.627399   61786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:34:04.627522   61786 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:34:04.628000   61786 out.go:303] Setting JSON to false
	I0725 13:34:04.642819   61786 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":16416,"bootTime":1658764828,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:34:04.642925   61786 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:34:04.664640   61786 out.go:177] * [default-k8s-different-port-20220725133258-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:34:04.706811   61786 notify.go:193] Checking for updates...
	I0725 13:34:04.728632   61786 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:34:04.750439   61786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:34:04.771702   61786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:34:04.793905   61786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:34:04.815725   61786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:34:04.838399   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:34:04.839023   61786 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:34:04.910460   61786 docker.go:137] docker version: linux-20.10.17
	I0725 13:34:04.910592   61786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:34:05.043298   61786 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:34:04.96917702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:34:05.086988   61786 out.go:177] * Using the docker driver based on existing profile
	I0725 13:34:05.107975   61786 start.go:284] selected driver: docker
	I0725 13:34:05.108005   61786 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:05.108159   61786 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:34:05.111649   61786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:34:05.244365   61786 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:34:05.170585413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:34:05.244504   61786 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 13:34:05.244520   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:05.244529   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:05.244542   61786 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:05.286913   61786 out.go:177] * Starting control plane node default-k8s-different-port-20220725133258-44543 in cluster default-k8s-different-port-20220725133258-44543
	I0725 13:34:05.308135   61786 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:34:05.329129   61786 out.go:177] * Pulling base image ...
	I0725 13:34:05.350052   61786 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:34:05.350055   61786 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:34:05.350147   61786 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:34:05.350159   61786 cache.go:57] Caching tarball of preloaded images
	I0725 13:34:05.350324   61786 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:34:05.350349   61786 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:34:05.351198   61786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/config.json ...
	I0725 13:34:05.414334   61786 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:34:05.414359   61786 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:34:05.414371   61786 cache.go:208] Successfully downloaded all kic artifacts
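The caching block above boils down to two cheap existence checks: a stat of the preload tarball on disk and an image lookup in the local daemon. A sketch of the tarball check, assuming only the cache layout visible in these paths (preloadExists is a hypothetical helper, not minikube's actual preload.go):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether the preloaded-images tarball for a
	// given Kubernetes version and container runtime is already cached;
	// this is the same stat TestDownloadOnly/.../preload-exists performs.
	func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
		return err == nil
	}

	func main() {
		fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.24.2", "docker"))
	}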
	I0725 13:34:05.414420   61786 start.go:370] acquiring machines lock for default-k8s-different-port-20220725133258-44543: {Name:mk82259bc75cbca30138642157acc7c9a727ddb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:34:05.414495   61786 start.go:374] acquired machines lock for "default-k8s-different-port-20220725133258-44543" in 57.072µs
	I0725 13:34:05.414516   61786 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:34:05.414526   61786 fix.go:55] fixHost starting: 
	I0725 13:34:05.414780   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:34:05.481920   61786 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220725133258-44543: state=Stopped err=<nil>
	W0725 13:34:05.481949   61786 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:34:05.504106   61786 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220725133258-44543" ...
	I0725 13:34:05.525512   61786 cli_runner.go:164] Run: docker start default-k8s-different-port-20220725133258-44543
	I0725 13:34:05.876454   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:34:05.950069   61786 kic.go:415] container "default-k8s-different-port-20220725133258-44543" state is running.
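"docker start" returns as soon as the container process launches, so the state is re-inspected before provisioning continues. A sketch of that poll loop using the exact inspect template logged above (waitRunning is a hypothetical name; the real logic lives in minikube's kic.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls `docker container inspect --format={{.State.Status}}`
	// until the container reports "running" or the deadline passes.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				"--format={{.State.Status}}", name).Output()
			if err == nil && strings.TrimSpace(string(out)) == "running" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %q not running after %s", name, timeout)
	}

	func main() {
		fmt.Println(waitRunning("default-k8s-different-port-20220725133258-44543", time.Minute))
	}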
	I0725 13:34:05.950674   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.030858   61786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/config.json ...
	I0725 13:34:06.031375   61786 machine.go:88] provisioning docker machine ...
	I0725 13:34:06.031401   61786 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220725133258-44543"
	I0725 13:34:06.031482   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.112519   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.112732   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.112746   61786 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220725133258-44543 && echo "default-k8s-different-port-20220725133258-44543" | sudo tee /etc/hostname
	I0725 13:34:06.239955   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220725133258-44543
	
	I0725 13:34:06.240048   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.314814   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.314971   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.314987   61786 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220725133258-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220725133258-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220725133258-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:34:06.435146   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
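The "native" SSH client in the libmachine lines is a pure-Go client rather than the system ssh binary. A generic sketch of running one provisioning command the same way with golang.org/x/crypto/ssh; the user, port 60201, and key location come from the sshutil lines in this log, everything else is illustrative and not minikube's actual code:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := os.Getenv("SSH_KEY_PATH") // e.g. the machines/.../id_rsa path in the sshutil lines
		key, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:60201", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Println(string(out), err)
	}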
	I0725 13:34:06.435164   61786 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:34:06.435185   61786 ubuntu.go:177] setting up certificates
	I0725 13:34:06.435210   61786 provision.go:83] configureAuth start
	I0725 13:34:06.435282   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.510139   61786 provision.go:138] copyHostCerts
	I0725 13:34:06.510295   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:34:06.510304   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:34:06.510390   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:34:06.510624   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:34:06.510637   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:34:06.510694   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:34:06.510842   61786 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:34:06.510848   61786 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:34:06.510906   61786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:34:06.511027   61786 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220725133258-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220725133258-44543]
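The server certificate is generated with the container IP, loopback, and the machine names baked in as subject alternative names, mirroring the san=[...] list above. A condensed sketch of building such a template with Go's standard library (the CA signing and PEM encoding steps are omitted; this is illustrative, not minikube's provision.go):

	package provision

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCertTemplate builds an x509 template whose SANs match the
	// san=[...] list logged above; signing against the CA is not shown.
	func serverCertTemplate(name string, ip net.IP) *x509.Certificate {
		return &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins." + name}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{ip, net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", name},
		}
	}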
	I0725 13:34:06.640290   61786 provision.go:172] copyRemoteCerts
	I0725 13:34:06.640354   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:34:06.640397   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.714183   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:06.800565   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0725 13:34:06.817495   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 13:34:06.835492   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:34:06.851531   61786 provision.go:86] duration metric: configureAuth took 416.13686ms
	I0725 13:34:06.851544   61786 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:34:06.851704   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:34:06.851763   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:06.922644   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:06.922819   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:06.922832   61786 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:34:07.045838   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:34:07.045853   61786 ubuntu.go:71] root file system type: overlay
	I0725 13:34:07.046003   61786 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:34:07.046082   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.116918   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:07.117160   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:07.117211   61786 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:34:07.249188   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:34:07.249277   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.319965   61786 main.go:134] libmachine: Using SSH client type: native
	I0725 13:34:07.320101   61786 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60201 <nil> <nil>}
	I0725 13:34:07.320113   61786 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:34:07.446161   61786 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:34:07.446178   61786 machine.go:91] provisioned docker machine in 1.414225697s
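The diff-or-swap one-liner above is what makes re-provisioning idempotent: the rendered unit only replaces the installed one, and docker is only restarted, when the contents actually differ. The same pattern expressed in Go on local paths, purely for illustration (replaceIfChanged is a hypothetical helper):

	package provision

	import (
		"bytes"
		"os"
	)

	// replaceIfChanged installs newPath over curPath only when contents
	// differ; the returned bool says whether a daemon-reload and service
	// restart are needed. A missing current file counts as a change.
	func replaceIfChanged(curPath, newPath string) (bool, error) {
		cur, _ := os.ReadFile(curPath) // missing file reads as empty: always differs
		proposed, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		if bytes.Equal(cur, proposed) {
			return false, os.Remove(newPath)
		}
		return true, os.Rename(newPath, curPath)
	}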
	I0725 13:34:07.446188   61786 start.go:307] post-start starting for "default-k8s-different-port-20220725133258-44543" (driver="docker")
	I0725 13:34:07.446194   61786 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:34:07.446265   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:34:07.446311   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.517500   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.603110   61786 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:34:07.606519   61786 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:34:07.606534   61786 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:34:07.606542   61786 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:34:07.606551   61786 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:34:07.606561   61786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:34:07.606663   61786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:34:07.606798   61786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:34:07.606947   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:34:07.613740   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:34:07.629891   61786 start.go:310] post-start completed in 183.624484ms
	I0725 13:34:07.629958   61786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:34:07.630015   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.700658   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.785856   61786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:34:07.790469   61786 fix.go:57] fixHost completed within 2.37498005s
	I0725 13:34:07.790481   61786 start.go:82] releasing machines lock for "default-k8s-different-port-20220725133258-44543", held for 2.375014977s
	I0725 13:34:07.790547   61786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.861116   61786 ssh_runner.go:195] Run: systemctl --version
	I0725 13:34:07.861126   61786 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:34:07.861183   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.861199   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:07.938182   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:07.940737   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:34:08.241696   61786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:34:08.251517   61786 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:34:08.251594   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:34:08.264323   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:34:08.277213   61786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:34:08.340273   61786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:34:08.415371   61786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:34:08.483465   61786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:34:08.713080   61786 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:34:08.784666   61786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:34:08.854074   61786 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
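
Note: the sequence above writes a two-line crictl configuration pointing at cri-dockerd, then cycles docker and cri-docker through systemctl. A minimal Go sketch of composing that crictl.yaml write as one shell pipeline; writeCrictlConfig is a hypothetical helper (not minikube's API), and it runs locally where minikube would run it over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // writeCrictlConfig builds the same pipeline the log shows: printf the
    // YAML body and tee it (with sudo) into /etc/crictl.yaml. The body is
    // single-quoted so the embedded newlines survive the shell.
    func writeCrictlConfig(socket string) error {
    	yaml := fmt.Sprintf("runtime-endpoint: %s\nimage-endpoint: %s\n", socket, socket)
    	cmd := fmt.Sprintf(`sudo mkdir -p /etc && printf %%s '%s' | sudo tee /etc/crictl.yaml`, yaml)
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	if err := writeCrictlConfig("unix:///var/run/cri-dockerd.sock"); err != nil {
    		fmt.Println("write failed:", err)
    	}
    }
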
	I0725 13:34:08.863345   61786 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:34:08.863409   61786 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:34:08.867141   61786 start.go:471] Will wait 60s for crictl version
	I0725 13:34:08.867182   61786 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:34:08.968151   61786 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:34:08.968217   61786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:34:09.002469   61786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:34:09.063979   61786 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:34:09.064085   61786 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220725133258-44543 dig +short host.docker.internal
	I0725 13:34:09.191610   61786 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:34:09.191718   61786 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:34:09.195961   61786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:34:09.205544   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:09.276048   61786 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:34:09.276131   61786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:34:09.305942   61786 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0725 13:34:09.305957   61786 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:34:09.306037   61786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:34:09.334786   61786 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
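
Note: extraction is skipped because every image the preload tarball would supply already appears in the `docker images` output above. A hedged sketch of that containment check; the function name and the sample lists below are illustrative, not minikube's actual code:

    package main

    import "fmt"

    // imagesPreloaded reports whether every expected image already shows
    // up in the `docker images --format {{.Repository}}:{{.Tag}}` output.
    func imagesPreloaded(got, expected []string) bool {
    	have := make(map[string]bool, len(got))
    	for _, img := range got {
    		have[img] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	got := []string{"k8s.gcr.io/kube-apiserver:v1.24.2", "k8s.gcr.io/pause:3.7"}
    	expected := []string{"k8s.gcr.io/kube-apiserver:v1.24.2"}
    	fmt.Println(imagesPreloaded(got, expected)) // true => skip extraction
    }
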
	I0725 13:34:09.334808   61786 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:34:09.334878   61786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:34:09.407682   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:09.407694   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:09.407709   61786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0725 13:34:09.407726   61786 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220725133258-44543 NodeName:default-k8s-different-port-20220725133258-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:34:09.407863   61786 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220725133258-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
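
Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the option struct logged at kubeadm.go:158. A minimal text/template sketch of that kind of rendering, with a deliberately tiny template covering only fields visible in the log; everything else about the struct is an assumption:

    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeadmOpts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	CRISocket        string
    }

    // A trimmed InitConfiguration template; the real one is far larger.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	opts := kubeadmOpts{
    		AdvertiseAddress: "192.168.76.2",
    		APIServerPort:    8444,
    		NodeName:         "default-k8s-different-port-20220725133258-44543",
    		CRISocket:        "/var/run/cri-dockerd.sock",
    	}
    	template.Must(template.New("init").Parse(initTmpl)).Execute(os.Stdout, opts)
    }
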
	
	I0725 13:34:09.407969   61786 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220725133258-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
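
Note: the drop-in above overrides the packaged kubelet unit; the empty ExecStart= line first clears the inherited command (a standard systemd idiom), then the full invocation is assembled from the profile config that follows it. A hedged sketch of composing those flags in Go, trimmed to the fields visible here; the struct and helper names are illustrative:

    package main

    import (
    	"fmt"
    	"strings"
    )

    type kubeletOpts struct {
    	Version, NodeName, NodeIP, CRISocket string
    }

    // execStart rebuilds the ExecStart line from the handful of options
    // the log shows; the real flag set comes from the full cluster config.
    func execStart(o kubeletOpts) string {
    	flags := []string{
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    		"--config=/var/lib/kubelet/config.yaml",
    		"--container-runtime=remote",
    		"--container-runtime-endpoint=" + o.CRISocket,
    		"--hostname-override=" + o.NodeName,
    		"--kubeconfig=/etc/kubernetes/kubelet.conf",
    		"--node-ip=" + o.NodeIP,
    	}
    	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", o.Version, strings.Join(flags, " "))
    }

    func main() {
    	fmt.Println(execStart(kubeletOpts{
    		Version:   "v1.24.2",
    		NodeName:  "default-k8s-different-port-20220725133258-44543",
    		NodeIP:    "192.168.76.2",
    		CRISocket: "/var/run/cri-dockerd.sock",
    	}))
    }
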
	I0725 13:34:09.408026   61786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:34:09.415906   61786 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:34:09.415950   61786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:34:09.422916   61786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0725 13:34:09.435999   61786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:34:09.447804   61786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0725 13:34:09.459767   61786 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:34:09.463350   61786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
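
Note: both hosts-file updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent shell idiom: strip any existing line ending in the tab-separated hostname, append the fresh mapping, and sudo-copy the result back over /etc/hosts. A sketch of building that one-liner in Go; addHostsEntry is a hypothetical helper and the command runs locally here rather than over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // addHostsEntry rebuilds /etc/hosts without any stale line for name,
    // appends "ip<TAB>name", and installs the result via sudo cp, exactly
    // mirroring the grep/echo/cp pipeline in the log.
    func addHostsEntry(ip, name string) error {
    	cmd := fmt.Sprintf(
    		`{ grep -v $'\t%s$' /etc/hosts; echo "%s	%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
    		name, ip, name)
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	if err := addHostsEntry("192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Println("hosts update failed:", err)
    	}
    }
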
	I0725 13:34:09.472214   61786 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543 for IP: 192.168.76.2
	I0725 13:34:09.472328   61786 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:34:09.472377   61786 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:34:09.472455   61786 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.key
	I0725 13:34:09.472518   61786 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.key.31bdca25
	I0725 13:34:09.472571   61786 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.key
	I0725 13:34:09.472770   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:34:09.472821   61786 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:34:09.472840   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:34:09.472875   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:34:09.472906   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:34:09.472936   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:34:09.473004   61786 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:34:09.473565   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:34:09.490187   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:34:09.506643   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:34:09.523366   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 13:34:09.539862   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:34:09.556235   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:34:09.572084   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:34:09.588997   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:34:09.605403   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:34:09.622071   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:34:09.639455   61786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:34:09.666648   61786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:34:09.680404   61786 ssh_runner.go:195] Run: openssl version
	I0725 13:34:09.685377   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:34:09.692933   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.696819   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.696867   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:34:09.701960   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:34:09.709308   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:34:09.717057   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.721219   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.721287   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:34:09.726658   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:34:09.733604   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:34:09.741720   61786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.745497   61786 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.745548   61786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:34:09.751361   61786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
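
Note: each `openssl x509 -hash -noout` call above computes a certificate's subject-name hash, and the paired `ln -fs` creates the `<hash>.0` link (e.g. b5213941.0 for the minikubeCA cert) that OpenSSL's CA directory lookup in /etc/ssl/certs expects. A small Go sketch of that pairing, assuming `openssl` and `ln` are on PATH; the paths are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hashLink computes the OpenSSL subject-name hash of certPath and
    // links /etc/ssl/certs/<hash>.0 at it, mirroring the openssl + ln
    // pair in the log.
    func hashLink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	return exec.Command("ln", "-fs", certPath, link).Run()
    }

    func main() {
    	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println("hashLink failed:", err)
    	}
    }
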
	I0725 13:34:09.758844   61786 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220725133258-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220725133258-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:34:09.758948   61786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:34:09.788556   61786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:34:09.796138   61786 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:34:09.796155   61786 kubeadm.go:626] restartCluster start
	I0725 13:34:09.796211   61786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:34:09.803427   61786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:09.803495   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:34:09.877185   61786 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220725133258-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:34:09.877366   61786 kubeconfig.go:127] "default-k8s-different-port-20220725133258-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:34:09.877706   61786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:34:09.878802   61786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:34:09.886342   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:09.886396   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:09.894462   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.094989   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.095125   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.104812   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.296311   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.296403   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.306824   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.494856   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.494967   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.505102   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.696865   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.697038   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.707693   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:10.896785   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:10.896969   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:10.907495   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.097072   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.097166   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.107646   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.294983   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.295100   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.304071   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.496628   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.496802   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.507122   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.697167   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.697382   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.708140   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:11.896909   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:11.897054   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:11.907309   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.095351   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.095504   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.107280   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.297402   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.297559   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.307933   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.497420   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.497620   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.509829   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.697477   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.697599   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.708129   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.897571   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.897712   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.908504   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.908514   61786 api_server.go:165] Checking apiserver status ...
	I0725 13:34:12.908558   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:34:12.916432   61786 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:12.916446   61786 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
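
Note: the block above is one apiserver-liveness probe repeated roughly every 200ms until a short deadline expires, at which point the runner concludes the cluster needs reconfiguring. The shape of that loop, sketched with the k8s.io/apimachinery wait helper whose timeout error carries exactly the "timed out waiting for the condition" message seen here; the interval, deadline, and condition body are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	// Poll every 200ms, give up after 3s, matching the cadence above.
    	err := wait.PollImmediate(200*time.Millisecond, 3*time.Second, func() (bool, error) {
    		// pgrep exits non-zero when no kube-apiserver process matches.
    		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil, nil
    	})
    	if err != nil {
    		fmt.Println("apiserver never appeared:", err) // "timed out waiting for the condition"
    	}
    }
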
	I0725 13:34:12.916453   61786 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:34:12.916512   61786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:34:12.948692   61786 docker.go:443] Stopping containers: [21fb696c3038 51677fb6144c b476d4c9ea34 61aef9880797 d95939ee82e4 142e60195cf4 1a100f122ea6 381dfaca547b abf2f82e0e50 e65639a75c81 11bcb130ff7a d0a27f48f794 449a4cccfc67 5548f957dbdf 09e5b2b95ce2]
	I0725 13:34:12.948776   61786 ssh_runner.go:195] Run: docker stop 21fb696c3038 51677fb6144c b476d4c9ea34 61aef9880797 d95939ee82e4 142e60195cf4 1a100f122ea6 381dfaca547b abf2f82e0e50 e65639a75c81 11bcb130ff7a d0a27f48f794 449a4cccfc67 5548f957dbdf 09e5b2b95ce2
	I0725 13:34:12.979483   61786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:34:12.989370   61786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:34:12.996679   61786 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 25 20:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 25 20:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 25 20:33 /etc/kubernetes/scheduler.conf
	
	I0725 13:34:12.996731   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 13:34:13.003759   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 13:34:13.011125   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 13:34:13.018455   61786 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:13.018511   61786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:34:13.025510   61786 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 13:34:13.033159   61786 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:34:13.033202   61786 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:34:13.040388   61786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:34:13.048073   61786 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:34:13.048082   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:13.093387   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.153710   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060020888s)
	I0725 13:34:14.153730   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.329681   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.375596   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:14.426060   61786 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:34:14.426130   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:14.937020   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:15.437072   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:34:15.498123   61786 api_server.go:71] duration metric: took 1.071801664s to wait for apiserver process to appear ...
	I0725 13:34:15.498156   61786 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:34:15.498176   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:15.499467   61786 api_server.go:256] stopped: https://127.0.0.1:60205/healthz: Get "https://127.0.0.1:60205/healthz": EOF
	I0725 13:34:16.000075   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:19.004558   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:19.004576   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:19.500619   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:19.507069   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:19.507082   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:20.000615   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:20.006253   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:34:20.006267   61786 api_server.go:102] status: https://127.0.0.1:60205/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:34:20.500871   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:34:20.506841   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 200:
	ok
	I0725 13:34:20.513394   61786 api_server.go:140] control plane version: v1.24.2
	I0725 13:34:20.513410   61786 api_server.go:130] duration metric: took 5.014188979s to wait for apiserver health ...
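
Note: each healthz check above is a plain HTTPS GET against the forwarded apiserver port; a 500 with per-hook [+]/[-] lines means the server is up but still running post-start hooks, and a 200 "ok" ends the wait. A minimal Go probe, under the assumption (made only for this sketch) that certificate verification is skipped for the local tunnel:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The tunnel terminates at 127.0.0.1 and the apiserver's CA is
    		// not in the local trust store, so this sketch skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://127.0.0.1:60205/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. EOF while the apiserver restarts
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // 200: ok once all hooks pass
    }
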
	I0725 13:34:20.513416   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:34:20.513426   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:34:20.513437   61786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:34:20.524394   61786 system_pods.go:59] 8 kube-system pods found
	I0725 13:34:20.524410   61786 system_pods.go:61] "coredns-6d4b75cb6d-ltpwj" [43fe43ee-d181-4a21-936f-c588e810d1b8] Running
	I0725 13:34:20.524414   61786 system_pods.go:61] "etcd-default-k8s-different-port-20220725133258-44543" [e409d4c7-e1f8-4825-b013-df9d0e6680d1] Running
	I0725 13:34:20.524422   61786 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [e373ecf2-4fb2-436f-b520-e05c162005e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 13:34:20.524429   61786 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [2203416f-f18e-4c6c-bf8f-62fe42f5d716] Running
	I0725 13:34:20.524433   61786 system_pods.go:61] "kube-proxy-bsbv8" [00380a03-69be-4582-bc91-be2e992a8756] Running
	I0725 13:34:20.524439   61786 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [ee2345ff-7e0e-4e32-a303-ec8637f9a6e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:34:20.524446   61786 system_pods.go:61] "metrics-server-5c6f97fb75-dt6cw" [5f26aec3-73de-457a-ab6e-6b8db807386c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:34:20.524451   61786 system_pods.go:61] "storage-provisioner" [872443cb-9c58-4914-bfd8-9c919c4c2729] Running
	I0725 13:34:20.524454   61786 system_pods.go:74] duration metric: took 11.01124ms to wait for pod list to return data ...
	I0725 13:34:20.524461   61786 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:34:20.528607   61786 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:34:20.528628   61786 node_conditions.go:123] node cpu capacity is 6
	I0725 13:34:20.528639   61786 node_conditions.go:105] duration metric: took 4.173368ms to run NodePressure ...
	I0725 13:34:20.528651   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:34:20.692164   61786 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0725 13:34:20.699554   61786 kubeadm.go:777] kubelet initialised
	I0725 13:34:20.699567   61786 kubeadm.go:778] duration metric: took 7.38462ms waiting for restarted kubelet to initialise ...
	I0725 13:34:20.699575   61786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:34:20.706455   61786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.713491   61786 pod_ready.go:92] pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:20.713502   61786 pod_ready.go:81] duration metric: took 7.031927ms waiting for pod "coredns-6d4b75cb6d-ltpwj" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.713509   61786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.720234   61786 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:20.720245   61786 pod_ready.go:81] duration metric: took 6.729713ms waiting for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:20.720266   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:22.736135   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:24.737145   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:26.739266   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:28.739639   61786 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:30.237640   61786 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.237653   61786 pod_ready.go:81] duration metric: took 9.516001331s waiting for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.237660   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.747406   61786 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.747419   61786 pod_ready.go:81] duration metric: took 509.699097ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.747427   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bsbv8" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.752023   61786 pod_ready.go:92] pod "kube-proxy-bsbv8" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.752032   61786 pod_ready.go:81] duration metric: took 4.600741ms waiting for pod "kube-proxy-bsbv8" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.752038   61786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.756171   61786 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:34:30.756179   61786 pod_ready.go:81] duration metric: took 4.135517ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:30.756185   61786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" ...
	I0725 13:34:32.769583   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:35.266268   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:37.269544   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:39.767177   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:42.270129   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:44.770171   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:47.266382   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:49.270908   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:51.766788   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:53.770199   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:56.267530   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:34:58.270364   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:00.271370   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:02.770491   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:05.267811   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:07.268019   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:09.269175   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:11.771924   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:14.268303   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:16.269490   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:18.269647   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:20.269732   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:22.770167   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:25.272345   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:27.768425   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:29.772730   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:32.269716   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:34.272269   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:36.769112   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:38.770762   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:41.269690   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:43.270881   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:45.770163   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:47.770257   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:49.772467   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:52.271997   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:54.770220   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:56.770972   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:35:58.772954   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:01.271748   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:03.769893   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:05.771682   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:08.272756   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:10.772210   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:13.269694   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:15.271259   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:17.271758   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:19.771816   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:22.271153   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:24.273500   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:26.771501   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:28.772146   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:30.773207   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:33.272043   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:35.773055   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:38.271505   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:40.771959   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:43.271115   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:45.272040   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:47.272525   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:49.771562   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:52.272147   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:54.273010   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:56.274371   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:36:58.774235   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:01.274355   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:03.773714   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:05.773848   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:07.774416   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:10.272739   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:12.273147   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:14.774766   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:17.272082   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:19.273723   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:21.275454   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:23.774025   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:25.774734   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:28.275012   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:30.275495   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:32.775435   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:35.273955   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:37.773340   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:40.273624   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:42.275705   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:44.772990   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:46.776013   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:49.275522   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:51.776201   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:54.272799   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:56.276129   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:37:58.776449   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:01.276937   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:03.775275   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:05.776601   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:08.275066   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:10.773865   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:12.777289   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:15.276931   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:17.277015   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:19.777784   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:22.274664   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:24.277500   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:26.777501   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:28.777629   61786 pod_ready.go:102] pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace has status "Ready":"False"
	I0725 13:38:30.768525   61786 pod_ready.go:81] duration metric: took 4m0.003905943s waiting for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" ...
	E0725 13:38:30.768539   61786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-dt6cw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0725 13:38:30.768550   61786 pod_ready.go:38] duration metric: took 4m10.059123063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:38:30.768619   61786 kubeadm.go:630] restartCluster took 4m20.959894497s
	W0725 13:38:30.768693   61786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
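The four-minute wait that just timed out comes from minikube's pod_ready helper, which polls the pod's Ready condition until a deadline. A minimal sketch of that polling pattern using client-go, assuming a standard clientset; waitPodReady and the 2s interval are illustrative, not minikube's actual implementation:

    // Poll the pod's Ready condition until it turns True or the timeout
    // (4m0s in the run above) elapses. Illustrative sketch only.
    package podready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }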
	I0725 13:38:30.768708   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 13:38:33.097038   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.328248172s)
	I0725 13:38:33.097098   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:38:33.106317   61786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:38:33.113479   61786 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0725 13:38:33.113523   61786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:38:33.120573   61786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 13:38:33.120592   61786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 13:38:33.404685   61786 out.go:204]   - Generating certificates and keys ...
	I0725 13:38:34.559189   61786 out.go:204]   - Booting up control plane ...
	I0725 13:38:41.609910   61786 out.go:204]   - Configuring RBAC rules ...
	I0725 13:38:41.984727   61786 cni.go:95] Creating CNI manager for ""
	I0725 13:38:41.984743   61786 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:38:41.984776   61786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:38:41.984845   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:41.984852   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6 minikube.k8s.io/name=default-k8s-different-port-20220725133258-44543 minikube.k8s.io/updated_at=2022_07_25T13_38_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:41.995147   61786 ops.go:34] apiserver oom_adj: -16
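The "-16" above is the apiserver's OOM score adjustment, read from procfs after locating the kube-apiserver process (the ops.go check run a few lines earlier). A hedged sketch of that kind of check, not minikube's code:

    // Find the kube-apiserver PID with pgrep, then read its oom_adj from
    // procfs; -16 makes the kernel much less likely to OOM-kill it.
    package oomcheck

    import (
        "os"
        "os/exec"
        "strings"
    )

    func apiserverOOMAdj() (string, error) {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(adj)), nil
    }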
	I0725 13:38:42.131241   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:42.687779   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:43.189737   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:43.689797   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:44.188581   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:44.688143   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:45.189843   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:45.688099   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:46.189813   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:46.689855   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:47.189263   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:47.689377   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:48.188105   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:48.688418   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:49.187882   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:49.689992   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:50.188492   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:50.689222   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:51.190147   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:51.688570   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:52.188695   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:52.688302   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:53.189368   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:53.688182   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.188476   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.688006   61786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 13:38:54.741570   61786 kubeadm.go:1045] duration metric: took 12.756406696s to wait for elevateKubeSystemPrivileges.
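The burst of identical "kubectl get sa default" runs above is a retry loop: minikube polls roughly every 500ms for the default ServiceAccount to exist before declaring elevateKubeSystemPrivileges done. A sketch of the same wait with client-go (the function name is an assumption):

    // Poll until the "default" ServiceAccount appears, mirroring the
    // ~500ms kubectl retry loop seen in the log above.
    package sawait

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitDefaultSA(cs kubernetes.Interface, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            return err == nil, nil // NotFound yet: keep polling
        })
    }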
	I0725 13:38:54.741586   61786 kubeadm.go:397] StartCluster complete in 4m44.96945209s
	I0725 13:38:54.741601   61786 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:38:54.741678   61786 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:38:54.742213   61786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:38:55.258629   61786 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220725133258-44543" rescaled to 1
	I0725 13:38:55.258668   61786 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:38:55.258680   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:38:55.258695   61786 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:38:55.282860   61786 out.go:177] * Verifying Kubernetes components...
	I0725 13:38:55.258826   61786 config.go:178] Loaded profile config "default-k8s-different-port-20220725133258-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:38:55.282931   61786 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282932   61786 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282939   61786 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.282940   61786 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.314162   61786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 13:38:55.355956   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:38:55.355963   61786 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355964   61786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355964   61786 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.355960   61786 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220725133258-44543"
	W0725 13:38:55.355985   61786 addons.go:162] addon dashboard should already be in state true
	W0725 13:38:55.355980   61786 addons.go:162] addon storage-provisioner should already be in state true
	W0725 13:38:55.355972   61786 addons.go:162] addon metrics-server should already be in state true
	I0725 13:38:55.356030   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356035   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356040   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.356345   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.356457   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.356516   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.357205   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.377760   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.503130   61786 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:55.522805   61786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0725 13:38:55.522844   61786 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:38:55.601767   61786 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:38:55.544055   61786 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:38:55.544114   61786 host.go:66] Checking if "default-k8s-different-port-20220725133258-44543" exists ...
	I0725 13:38:55.580996   61786 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:38:55.598483   61786 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220725133258-44543" to be "Ready" ...
	I0725 13:38:55.622916   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:38:55.622930   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:38:55.622939   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:38:55.623010   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.623338   61786 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220725133258-44543 --format={{.State.Status}}
	I0725 13:38:55.626513   61786 node_ready.go:49] node "default-k8s-different-port-20220725133258-44543" has status "Ready":"True"
	I0725 13:38:55.680887   61786 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:38:55.659957   61786 node_ready.go:38] duration metric: took 37.0254ms waiting for node "default-k8s-different-port-20220725133258-44543" to be "Ready" ...
	I0725 13:38:55.660000   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.718012   61786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:38:55.718127   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:38:55.718145   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:38:55.718254   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.729313   61786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace to be "Ready" ...
	I0725 13:38:55.755052   61786 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:38:55.755074   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:38:55.755201   61786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220725133258-44543
	I0725 13:38:55.759149   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.819926   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.824327   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.852585   61786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60201 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/default-k8s-different-port-20220725133258-44543/id_rsa Username:docker}
	I0725 13:38:55.911420   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:38:55.930541   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:38:55.930555   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:38:55.947415   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:38:55.947430   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:38:56.015152   61786 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:38:56.015187   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:38:56.024307   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:38:56.024322   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:38:56.037145   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:38:56.128060   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:38:56.208962   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:38:56.208980   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:38:56.314257   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:38:56.314275   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:38:56.500909   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:38:56.500925   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:38:56.505830   61786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149864883s)
	I0725 13:38:56.505848   61786 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
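The sed pipeline that just completed rewrites the CoreDNS ConfigMap so cluster workloads can resolve host.minikube.internal. Reconstructed from the sed expression in the command above, the Corefile gains a hosts block immediately ahead of the forward directive, roughly:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough directive lets queries that do not match the hosts entries continue on to the forward plugin.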
	I0725 13:38:56.539806   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:38:56.539822   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:38:56.630424   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:38:56.630457   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:38:56.706962   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:38:56.706979   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:38:56.735501   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:38:56.735519   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:38:56.740969   61786 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-krc4w" not found
	I0725 13:38:56.740985   61786 pod_ready.go:81] duration metric: took 1.011621188s waiting for pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace to be "Ready" ...
	E0725 13:38:56.740999   61786 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-krc4w" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-krc4w" not found
	I0725 13:38:56.741009   61786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace to be "Ready" ...
	I0725 13:38:56.818086   61786 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:38:56.818101   61786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:38:56.844768   61786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
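The "scp memory -->" lines above stream manifests held in memory straight to paths inside the node over the SSH clients opened earlier, rather than staging local temp files. A minimal sketch of that idea with golang.org/x/crypto/ssh; copyMemory and the sudo-tee mechanism are assumptions, not minikube's sshutil implementation:

    // Write an in-memory manifest to a remote path over SSH by piping
    // it into "sudo tee" on the node.
    package scpmem

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func copyMemory(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee consumes stdin and writes the bytes to dst as root.
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }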
	I0725 13:38:56.929491   61786 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220725133258-44543"
	I0725 13:38:57.767005   61786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:38:57.841616   61786 addons.go:414] enableAddons completed in 2.582848432s
	I0725 13:38:58.756866   61786 pod_ready.go:102] pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace has status "Ready":"False"
	I0725 13:39:00.755719   61786 pod_ready.go:92] pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.755735   61786 pod_ready.go:81] duration metric: took 4.014600528s waiting for pod "coredns-6d4b75cb6d-whj7v" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.755745   61786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.762033   61786 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.762042   61786 pod_ready.go:81] duration metric: took 6.291089ms waiting for pod "etcd-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.762049   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.767591   61786 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.767601   61786 pod_ready.go:81] duration metric: took 5.547326ms waiting for pod "kube-apiserver-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.767610   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.777675   61786 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.777686   61786 pod_ready.go:81] duration metric: took 10.069146ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.777694   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdsqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.783826   61786 pod_ready.go:92] pod "kube-proxy-pdsqs" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:00.783835   61786 pod_ready.go:81] duration metric: took 6.136533ms waiting for pod "kube-proxy-pdsqs" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:00.783841   61786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:01.152734   61786 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace has status "Ready":"True"
	I0725 13:39:01.152745   61786 pod_ready.go:81] duration metric: took 368.887729ms waiting for pod "kube-scheduler-default-k8s-different-port-20220725133258-44543" in "kube-system" namespace to be "Ready" ...
	I0725 13:39:01.152751   61786 pod_ready.go:38] duration metric: took 5.434537401s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 13:39:01.152763   61786 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:39:01.152815   61786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:39:01.178939   61786 api_server.go:71] duration metric: took 5.920074581s to wait for apiserver process to appear ...
	I0725 13:39:01.178955   61786 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:39:01.178962   61786 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60205/healthz ...
	I0725 13:39:01.184599   61786 api_server.go:266] https://127.0.0.1:60205/healthz returned 200:
	ok
	I0725 13:39:01.185840   61786 api_server.go:140] control plane version: v1.24.2
	I0725 13:39:01.185848   61786 api_server.go:130] duration metric: took 6.888886ms to wait for apiserver health ...
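The healthz check above probes the apiserver through the port Docker publishes on 127.0.0.1 and expects HTTP 200 with a literal "ok" body. A sketch of such a probe; the insecure TLS config is an assumption for a localhost-tunneled check, not necessarily what minikube does:

    // GET the apiserver health endpoint and require 200 with body "ok".
    package healthz

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func check(url string) error { // e.g. https://127.0.0.1:60205/healthz
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
        }
        return nil
    }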
	I0725 13:39:01.185853   61786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:39:01.356451   61786 system_pods.go:59] 8 kube-system pods found
	I0725 13:39:01.356466   61786 system_pods.go:61] "coredns-6d4b75cb6d-whj7v" [ee95aea1-d131-4524-a2e1-04d0c4da8e20] Running
	I0725 13:39:01.356470   61786 system_pods.go:61] "etcd-default-k8s-different-port-20220725133258-44543" [9971c3fd-8bc1-4799-825c-47d542d172cd] Running
	I0725 13:39:01.356474   61786 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [003259d7-d067-4c90-b5bf-34a9c60d430c] Running
	I0725 13:39:01.356477   61786 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [c292e2ae-00d3-48c2-8d9a-e06a2301d358] Running
	I0725 13:39:01.356483   61786 system_pods.go:61] "kube-proxy-pdsqs" [ab647055-f1f8-4144-a7a2-1d7a7da1e1cf] Running
	I0725 13:39:01.356496   61786 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [2c7cb5cc-4c11-4aee-a9c7-9e657d1b3610] Running
	I0725 13:39:01.356502   61786 system_pods.go:61] "metrics-server-5c6f97fb75-6tbqr" [d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:39:01.356511   61786 system_pods.go:61] "storage-provisioner" [a99d3e3f-11b6-4b57-9e40-e684accad53d] Running
	I0725 13:39:01.356515   61786 system_pods.go:74] duration metric: took 170.651524ms to wait for pod list to return data ...
	I0725 13:39:01.356521   61786 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:39:01.553425   61786 default_sa.go:45] found service account: "default"
	I0725 13:39:01.553439   61786 default_sa.go:55] duration metric: took 196.906744ms for default service account to be created ...
	I0725 13:39:01.553446   61786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 13:39:01.754454   61786 system_pods.go:86] 8 kube-system pods found
	I0725 13:39:01.754469   61786 system_pods.go:89] "coredns-6d4b75cb6d-whj7v" [ee95aea1-d131-4524-a2e1-04d0c4da8e20] Running
	I0725 13:39:01.754473   61786 system_pods.go:89] "etcd-default-k8s-different-port-20220725133258-44543" [9971c3fd-8bc1-4799-825c-47d542d172cd] Running
	I0725 13:39:01.754477   61786 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220725133258-44543" [003259d7-d067-4c90-b5bf-34a9c60d430c] Running
	I0725 13:39:01.754481   61786 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220725133258-44543" [c292e2ae-00d3-48c2-8d9a-e06a2301d358] Running
	I0725 13:39:01.754484   61786 system_pods.go:89] "kube-proxy-pdsqs" [ab647055-f1f8-4144-a7a2-1d7a7da1e1cf] Running
	I0725 13:39:01.754488   61786 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220725133258-44543" [2c7cb5cc-4c11-4aee-a9c7-9e657d1b3610] Running
	I0725 13:39:01.754496   61786 system_pods.go:89] "metrics-server-5c6f97fb75-6tbqr" [d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:39:01.754500   61786 system_pods.go:89] "storage-provisioner" [a99d3e3f-11b6-4b57-9e40-e684accad53d] Running
	I0725 13:39:01.754505   61786 system_pods.go:126] duration metric: took 201.049861ms to wait for k8s-apps to be running ...
	I0725 13:39:01.754512   61786 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 13:39:01.754564   61786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:39:01.765642   61786 system_svc.go:56] duration metric: took 11.126618ms WaitForService to wait for kubelet.
	I0725 13:39:01.765659   61786 kubeadm.go:572] duration metric: took 6.506780003s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0725 13:39:01.765680   61786 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:39:01.952036   61786 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:39:01.952050   61786 node_conditions.go:123] node cpu capacity is 6
	I0725 13:39:01.952056   61786 node_conditions.go:105] duration metric: took 186.353687ms to run NodePressure ...
	I0725 13:39:01.952064   61786 start.go:216] waiting for startup goroutines ...
	I0725 13:39:01.984984   61786 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:39:02.007662   61786 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220725133258-44543" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:34:06 UTC, end at Mon 2022-07-25 20:39:58 UTC. --
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.111076809Z" level=info msg="ignoring event" container=dd7a0ff0a2e48b9846aa099283eeac08074b71a4e94c8f84d3620802af731e44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.178903654Z" level=info msg="ignoring event" container=845949fda78bed43cb8c5994005f590bca7ac0830c92d42a03b27ab0fef66f27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.243971301Z" level=info msg="ignoring event" container=e90bb6e19089f3c4258acfd3a71f90956f641d56a20d2601ab9428f766327432 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.310152312Z" level=info msg="ignoring event" container=c8e1d3b2d1d6656576e473940dae274e7234ab61c29132ced7c0923b596e690c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.390719334Z" level=info msg="ignoring event" container=393b69df1d6b40d6275bca11660c3304940bbe503a0ddd7d680fb10c1899f6f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.458214100Z" level=info msg="ignoring event" container=c3157f62a5c25544457b46deb9ab6075345b1560a91925f4324144581034a0f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.574054948Z" level=info msg="ignoring event" container=179f3e9d56294f3398f8295ba86abb6973eff67c719c3e23af9a1b952e23ca79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.637501332Z" level=info msg="ignoring event" container=313d52e40630bbe76b1d83bee9325b607ed310383148576b61095113107e2c49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:32 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:32.723857430Z" level=info msg="ignoring event" container=42e0c857bff2751624cf59c62181e9a07194d6a1dc7084b6eceb87e27aa6dfe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:55 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:55.220746966Z" level=info msg="ignoring event" container=09e496a283e3161035a3e331f928d8cfcbc2c9d828c344bd38ada8728c02a5a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:38:58 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:58.070318985Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:38:58 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:58.070344443Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:38:58 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:58.071489449Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:38:59 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:38:59.265533438Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 25 20:39:04 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:04.835967380Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:39:05 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:05.137907004Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 25 20:39:08 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:08.519826084Z" level=info msg="ignoring event" container=28616caa6bd441c623c10d57da94a030b55c88dd7b42c14ca61630f4cf26e4eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:39:09 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:09.271561451Z" level=info msg="ignoring event" container=73a466ed4aae161455e2519979dbf041a8fe4764d65b6e5888b5b9d954c98d64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:39:13 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:13.058648295Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:13 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:13.058740049Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:13 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:13.059903138Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:55.608968767Z" level=info msg="ignoring event" container=b4aeec6683ebdbfae89eeb7c099569b04320b816bf64672f1820459dbaf1d458 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:55.648542714Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:55.648764097Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 dockerd[511]: time="2022-07-25T20:39:55.650449011Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
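The repeated pull failures above are expected rather than a Docker fault: this test points the metrics-server deployment at the unresolvable registry fake.domain (see "Using image fake.domain/k8s.gcr.io/echoserver:1.4" earlier in the log), so every pull dies at the DNS lookup, and that in turn is why the metrics-server pods in this profile never reach Ready. A one-liner reproduces the failure mode:

    // Demonstrates why the pulls fail: fake.domain does not resolve.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, err := net.LookupHost("fake.domain")
        fmt.Println(err) // lookup fake.domain ...: no such host
    }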
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	b4aeec6683ebd       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   b0e71245e542b
	1ed4c1abade58       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   54 seconds ago       Running             kubernetes-dashboard        0                   41f7b8202d563
	a83ee8e474c19       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   82596d03739af
	08fc61a891721       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   b2344f0b5438f
	8115f30a19008       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   f5ecaa90b9ca4
	bff25f1c21fc0       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   665915a11c73c
	f26823dfc5081       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   7a7dc57d5af33
	6eb33669f1b14       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   dc64aac1af48a
	b785402e83334       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   49d920006ee98
	
	* 
	* ==> coredns [08fc61a89172] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220725133258-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220725133258-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=default-k8s-different-port-20220725133258-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_38_41_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:38:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220725133258-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:39:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:39:51 +0000   Mon, 25 Jul 2022 20:38:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-different-port-20220725133258-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                4c41b13b-a64f-4800-b37e-f3f5767d3eeb
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-whj7v                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-default-k8s-different-port-20220725133258-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220725133258-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220725133258-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-pdsqs                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220725133258-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-5c6f97fb75-6tbqr                                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-nz9jv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-7tp4v                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 62s   kube-proxy       
	  Normal  Starting                 77s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientPID
	  Normal  NodeReady                66s   kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeReady
	  Normal  RegisteredNode           65s   node-controller  Node default-k8s-different-port-20220725133258-44543 event: Registered Node default-k8s-different-port-20220725133258-44543 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node default-k8s-different-port-20220725133258-44543 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [f26823dfc508] <==
	* {"level":"info","ts":"2022-07-25T20:38:36.042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:38:36.044Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:38:36.437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:38:36.438Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-different-port-20220725133258-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:38:36.439Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:38:36.440Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:38:36.440Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:38:36.440Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:38:36.458Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:39:55.771Z","caller":"traceutil/trace.go:171","msg":"trace[921325420] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"116.499464ms","start":"2022-07-25T20:39:55.654Z","end":"2022-07-25T20:39:55.771Z","steps":["trace[921325420] 'process raft request'  (duration: 78.592549ms)","trace[921325420] 'compare'  (duration: 37.504418ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  20:39:59 up  1:21,  0 users,  load average: 0.60, 0.86, 1.04
	Linux default-k8s-different-port-20220725133258-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [bff25f1c21fc] <==
	* I0725 20:38:40.934763       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:38:41.798602       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:38:41.803847       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 20:38:41.813461       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:38:41.898584       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:38:54.639342       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:38:54.690800       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 20:38:56.203335       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:38:56.919007       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.96.104.185]
	I0725 20:38:57.723139       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.98.150.230]
	I0725 20:38:57.733509       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.118.123]
	W0725 20:38:57.738239       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:38:57.738672       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:38:57.738749       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:38:57.738687       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:38:57.738998       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:38:57.740807       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:39:57.697917       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:39:57.698027       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:39:57.698041       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:39:57.699979       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:39:57.700040       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:39:57.700049       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [6eb33669f1b1] <==
	* I0725 20:38:54.792048       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-krc4w"
	I0725 20:38:54.799830       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-whj7v"
	I0725 20:38:54.817070       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-krc4w"
	I0725 20:38:56.739740       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:38:56.742598       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:38:56.747741       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:38:56.806125       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-6tbqr"
	I0725 20:38:57.628213       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:38:57.633566       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:38:57.636299       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0725 20:38:57.638508       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.641539       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.643379       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.643493       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.647538       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0725 20:38:57.650992       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.651053       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.654821       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.654996       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0725 20:38:57.658762       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 20:38:57.658815       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0725 20:38:57.668216       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-7tp4v"
	I0725 20:38:57.704905       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-nz9jv"
	E0725 20:39:51.582855       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0725 20:39:51.590715       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [8115f30a1900] <==
	* I0725 20:38:55.953417       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:38:55.953700       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:38:55.953907       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:38:56.128544       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:38:56.129123       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:38:56.129204       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:38:56.129217       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:38:56.129407       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:38:56.139319       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:38:56.139532       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:38:56.139559       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:38:56.200739       1 config.go:444] "Starting node config controller"
	I0725 20:38:56.200768       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:38:56.201365       1 config.go:317] "Starting service config controller"
	I0725 20:38:56.201372       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:38:56.201385       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:38:56.201387       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:38:56.301456       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:38:56.301616       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:38:56.301651       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [b785402e8333] <==
	* W0725 20:38:38.828092       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 20:38:38.828100       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0725 20:38:38.828211       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 20:38:38.828223       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 20:38:38.828494       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 20:38:38.828531       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 20:38:38.828610       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:38.828610       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:38.828621       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:38.828625       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:38.829033       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:38:38.829066       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:38:38.829035       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:38.829080       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:39.654638       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 20:38:39.654660       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 20:38:39.683556       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:38:39.683606       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:38:39.737993       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 20:38:39.738046       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:38:39.894003       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:38:39.894076       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:38:39.899098       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:38:39.899134       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0725 20:38:41.560715       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:34:06 UTC, end at Mon 2022-07-25 20:40:00 UTC. --
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986671    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a99d3e3f-11b6-4b57-9e40-e684accad53d-tmp\") pod \"storage-provisioner\" (UID: \"a99d3e3f-11b6-4b57-9e40-e684accad53d\") " pod="kube-system/storage-provisioner"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986687    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc5b8eee-f995-4dd2-9453-12fa2acc79d8-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-nz9jv\" (UID: \"fc5b8eee-f995-4dd2-9453-12fa2acc79d8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-nz9jv"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986734    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/03cb3fda-d35f-4d4f-824b-390bfded730d-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-7tp4v\" (UID: \"03cb3fda-d35f-4d4f-824b-390bfded730d\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7tp4v"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986925    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9-tmp-dir\") pod \"metrics-server-5c6f97fb75-6tbqr\" (UID: \"d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9\") " pod="kube-system/metrics-server-5c6f97fb75-6tbqr"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986950    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krc5g\" (UniqueName: \"kubernetes.io/projected/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-kube-api-access-krc5g\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.986998    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmjm\" (UniqueName: \"kubernetes.io/projected/d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9-kube-api-access-fnmjm\") pod \"metrics-server-5c6f97fb75-6tbqr\" (UID: \"d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9\") " pod="kube-system/metrics-server-5c6f97fb75-6tbqr"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987081    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-kube-proxy\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987120    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-lib-modules\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987157    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab647055-f1f8-4144-a7a2-1d7a7da1e1cf-xtables-lock\") pod \"kube-proxy-pdsqs\" (UID: \"ab647055-f1f8-4144-a7a2-1d7a7da1e1cf\") " pod="kube-system/kube-proxy-pdsqs"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987177    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb4d\" (UniqueName: \"kubernetes.io/projected/ee95aea1-d131-4524-a2e1-04d0c4da8e20-kube-api-access-lfb4d\") pod \"coredns-6d4b75cb6d-whj7v\" (UID: \"ee95aea1-d131-4524-a2e1-04d0c4da8e20\") " pod="kube-system/coredns-6d4b75cb6d-whj7v"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987192    9591 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5559b\" (UniqueName: \"kubernetes.io/projected/a99d3e3f-11b6-4b57-9e40-e684accad53d-kube-api-access-5559b\") pod \"storage-provisioner\" (UID: \"a99d3e3f-11b6-4b57-9e40-e684accad53d\") " pod="kube-system/storage-provisioner"
	Jul 25 20:39:52 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:52.987230    9591 reconciler.go:157] "Reconciler: start to sync state"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:54.117381    9591 request.go:601] Waited for 1.120408457s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.168853    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.361984    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.521103    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:54 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:54.776661    9591 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220725133258-44543\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220725133258-44543"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:55.321076    9591 scope.go:110] "RemoveContainer" containerID="73a466ed4aae161455e2519979dbf041a8fe4764d65b6e5888b5b9d954c98d64"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:55.651036    9591 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:55.651127    9591 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:55.651510    9591 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fnmjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-6tbqr_kube-system(d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:55.651561    9591 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-6tbqr" podUID=d56ebb3b-88b9-4c7b-a6fe-1d145d3d62e9
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:55.999445    9591 scope.go:110] "RemoveContainer" containerID="73a466ed4aae161455e2519979dbf041a8fe4764d65b6e5888b5b9d954c98d64"
	Jul 25 20:39:55 default-k8s-different-port-20220725133258-44543 kubelet[9591]: I0725 20:39:55.999771    9591 scope.go:110] "RemoveContainer" containerID="b4aeec6683ebdbfae89eeb7c099569b04320b816bf64672f1820459dbaf1d458"
	Jul 25 20:39:56 default-k8s-different-port-20220725133258-44543 kubelet[9591]: E0725 20:39:55.999958    9591 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-nz9jv_kubernetes-dashboard(fc5b8eee-f995-4dd2-9453-12fa2acc79d8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-nz9jv" podUID=fc5b8eee-f995-4dd2-9453-12fa2acc79d8
	
	* 
	* ==> kubernetes-dashboard [1ed4c1abade5] <==
	* 2022/07/25 20:39:04 Using namespace: kubernetes-dashboard
	2022/07/25 20:39:04 Using in-cluster config to connect to apiserver
	2022/07/25 20:39:04 Using secret token for csrf signing
	2022/07/25 20:39:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/25 20:39:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/25 20:39:04 Successful initial request to the apiserver, version: v1.24.2
	2022/07/25 20:39:04 Generating JWE encryption key
	2022/07/25 20:39:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/25 20:39:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/25 20:39:04 Initializing JWE encryption key from synchronized object
	2022/07/25 20:39:04 Creating in-cluster Sidecar client
	2022/07/25 20:39:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:39:04 Serving insecurely on HTTP port: 9090
	2022/07/25 20:39:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/25 20:39:04 Starting overwatch
	
	* 
	* ==> storage-provisioner [a83ee8e474c1] <==
	* I0725 20:38:57.431178       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:38:57.448392       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:38:57.448467       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:38:57.504818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:38:57.505016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725133258-44543_8c774d3f-2c31-4654-9f23-95742b78792a!
	I0725 20:38:57.505686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6fbd39bb-48a1-4285-acaa-c7fbedcd339f", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220725133258-44543_8c774d3f-2c31-4654-9f23-95742b78792a became leader
	I0725 20:38:57.606298       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220725133258-44543_8c774d3f-2c31-4654-9f23-95742b78792a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-6tbqr
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 describe pod metrics-server-5c6f97fb75-6tbqr

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220725133258-44543 describe pod metrics-server-5c6f97fb75-6tbqr: exit status 1 (281.861424ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-6tbqr" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220725133258-44543 describe pod metrics-server-5c6f97fb75-6tbqr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.81s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:40:27.428674   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:40:29.044560   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:40:29.419648   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:41:30.956983   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:41:36.999849   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:41:44.509862   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:43:01.300151   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:43:12.711133   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:43:41.859683   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:41.866155   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:41.877845   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:41.899598   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:41.940862   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:42.023093   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:42.185365   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:42.507559   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:43.148456   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:44.430775   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:46.991538   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
E0725 13:43:52.111880   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
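Note: the twelve cert_rotation.go failures above land at roughly doubling intervals (about 7ms apart at 13:43:41, growing to ~5s by 13:43:52), i.e. client-go keeps retrying the deleted profile's client.crt with exponential backoff. A minimal sketch of that cadence, assuming k8s.io/apimachinery's wait.ExponentialBackoff (whether cert rotation uses exactly this helper is an assumption; the sketch only reproduces the spacing visible in the timestamps):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()
	// Roughly reproduces the spacing of the 13:43:41..13:43:52 retries:
	// each failed attempt doubles the delay before the next one.
	backoff := wait.Backoff{Duration: 7 * time.Millisecond, Factor: 2, Steps: 12}
	_ = wait.ExponentialBackoff(backoff, func() (bool, error) {
		fmt.Printf("retry at +%s\n", time.Since(start).Round(time.Millisecond))
		return false, nil // the condition keeps failing, like the missing client.crt
	})
}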
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:43:56.087184   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:44:02.354318   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:44:10.481254   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:44:15.690742   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:44:20.684333   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:44:22.835366   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[last message repeated 3 more times]
E0725 13:45:03.798640   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[last message repeated 1 more time]
E0725 13:45:27.439642   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:45:29.055391   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:45:29.428789   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0725 13:45:38.743593   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58937/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
[last message repeated 2 more times]
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 13 more times]
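Note: the warnings switch from EOF to rate-limiter errors once the test's overall deadline expires: client-go routes every request through a shared token-bucket limiter, and Wait returns the context error as soon as the deadline can no longer be met, so the same line repeats until the 9m0s poll gives up. A minimal sketch of that fail-fast behavior, assuming golang.org/x/time/rate (the limiter that client-go's default flowcontrol throttle wraps):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// client-go defaults to roughly QPS=5 with burst 10 for REST requests.
	limiter := rate.NewLimiter(5, 10)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	for i := 0; i < 15; i++ {
		if err := limiter.Wait(ctx); err != nil {
			// Once the burst is spent and the deadline cannot be met, Wait
			// fails immediately with a context error, mirroring
			// "client rate limiter Wait returned an error: ...".
			fmt.Printf("request %d: %v\n", i, err)
			continue
		}
		fmt.Printf("request %d: sent\n", i)
	}
}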
E0725 13:46:25.723402   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 4 more times]
E0725 13:46:30.967117   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 5 more times]
E0725 13:46:37.009430   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0725 13:46:38.249339   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 6 more times]
E0725 13:46:44.518770   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 87 more times]
E0725 13:48:12.718209   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 18 more times]
E0725 13:48:32.139264   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 9 more times]
E0725 13:48:41.869906   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/default-k8s-different-port-20220725133258-44543/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 9 more times]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (447.707689ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-20220725131610-44543" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220725131610-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220725131610-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.485µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220725131610-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
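Note: "timed out waiting for the condition" is the stock error from k8s.io/apimachinery's wait package, which suggests the test polls the pod list until its 9m0s deadline. A hypothetical sketch of that polling pattern (not the actual helpers_test.go implementation; all names here are illustrative): list pods by label selector, log list failures as warnings, and retry until the deadline expires.

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPod is a hypothetical helper: it polls the pod list for a
// label selector and returns wait.ErrWaitTimeout ("timed out waiting for
// the condition") once the caller's context (here, 9m0s) expires.
func waitForRunningPod(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// List errors (EOF, rate limiter, ...) are logged and retried,
			// producing the stream of WARNING lines seen above.
			log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	}, ctx.Done())
}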
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220725131610-44543
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220725131610-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c",
	        "Created": "2022-07-25T20:16:17.246440867Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:21:54.974897982Z",
	            "FinishedAt": "2022-07-25T20:21:52.153635121Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/hosts",
	        "LogPath": "/var/lib/docker/containers/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c/6935d4927a39f4ec4845798aff773e95bc8ae838958e9958b965359cf4e99f8c-json.log",
	        "Name": "/old-k8s-version-20220725131610-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220725131610-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220725131610-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b32028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/docker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f15eeb93498eb9b562fa0404f6a7af49ce3b2ca697ba1b4c09698c8f0f5924f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220725131610-44543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220725131610-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220725131610-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220725131610-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d84a1a595955080b294e46d4c0e514ca16b44447ef22b822c1bc5aa4576d787b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58935"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58937"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d84a1a595955",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220725131610-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6935d4927a39",
	                        "old-k8s-version-20220725131610-44543"
	                    ],
	                    "NetworkID": "c2f2901f9a0d93fa66499c6332491a576318c2a7c67d4d75046d6eea022d9aab",
	                    "EndpointID": "43cf55334515d40188d52abea75fa535d217d7aa8b4c915012814925b60fae46",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
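
The port lookups that recur throughout this report are Go templates evaluated by "docker container inspect -f", and the JSON dump above is the structure those templates walk. As a minimal sketch (an editor's illustration, not part of the test suite), the same "22/tcp" host-port lookup can be done by decoding the inspect output with Go's standard library; it assumes a local Docker daemon and reuses the container name from the dump:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry models only the slice of `docker inspect` output we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-20220725131610-44543").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry // `docker inspect` always prints a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	// Equivalent of {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
	if b := entries[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
		fmt.Println(b[0].HostPort) // "58933" in the dump above
	}
}
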
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (428.85243ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
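
A non-zero exit here does not mean the status command failed to run: minikube encodes component health in the exit code of "minikube status", which is why the helper notes "may be ok" while stdout still reports the host as Running. A minimal Go sketch of separating "ran but reported a degraded state" from "could not run at all" (illustrative only; the binary path and profile name are placeholders):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status", "--format={{.Host}}", "-p", "some-profile")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("healthy: %s", out)
	case errors.As(err, &exitErr):
		// The process ran; the code reports component state, so the
		// output may still be usable (as with "Running" plus exit 2 above).
		fmt.Printf("degraded: %s (exit %d, may be ok)\n", out, exitErr.ExitCode())
	default:
		log.Fatal(err) // the binary could not be started at all
	}
}
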
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220725131610-44543 logs -n 25: (3.497044467s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725133257-44543      | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | disable-driver-mounts-20220725133257-44543                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725134004-44543 --memory=2200           | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725134004-44543 --memory=2200           | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:42 PDT | 25 Jul 22 13:42 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:42 PDT | 25 Jul 22 13:42 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:41:01
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:41:01.604769   62641 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:41:01.604901   62641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:41:01.604906   62641 out.go:309] Setting ErrFile to fd 2...
	I0725 13:41:01.604910   62641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:41:01.605019   62641 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:41:01.605467   62641 out.go:303] Setting JSON to false
	I0725 13:41:01.620475   62641 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":16833,"bootTime":1658764828,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:41:01.620568   62641 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:41:01.642382   62641 out.go:177] * [newest-cni-20220725134004-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:41:01.664186   62641 notify.go:193] Checking for updates...
	I0725 13:41:01.685918   62641 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:41:01.707967   62641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:01.729065   62641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:41:01.751081   62641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:41:01.772267   62641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:41:01.794758   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:01.795416   62641 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:41:01.864839   62641 docker.go:137] docker version: linux-20.10.17
	I0725 13:41:01.864995   62641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:41:01.998821   62641 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:41:01.926956783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:41:02.020502   62641 out.go:177] * Using the docker driver based on existing profile
	I0725 13:41:02.041400   62641 start.go:284] selected driver: docker
	I0725 13:41:02.041426   62641 start.go:808] validating driver "docker" against &{Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:02.041590   62641 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:41:02.044316   62641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:41:02.175804   62641 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:41:02.106306455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:41:02.175969   62641 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 13:41:02.175988   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:02.175998   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:02.176021   62641 start_flags.go:310] config:
	{Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:02.219416   62641 out.go:177] * Starting control plane node newest-cni-20220725134004-44543 in cluster newest-cni-20220725134004-44543
	I0725 13:41:02.240914   62641 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:41:02.263062   62641 out.go:177] * Pulling base image ...
	I0725 13:41:02.305845   62641 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:41:02.305897   62641 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:41:02.305926   62641 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:41:02.305951   62641 cache.go:57] Caching tarball of preloaded images
	I0725 13:41:02.306134   62641 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:41:02.306156   62641 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:41:02.307199   62641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/config.json ...
	I0725 13:41:02.370228   62641 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:41:02.370244   62641 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:41:02.370255   62641 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:41:02.370307   62641 start.go:370] acquiring machines lock for newest-cni-20220725134004-44543: {Name:mk938127dcd35e39de5792da4cde1f6031a6baad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:41:02.370384   62641 start.go:374] acquired machines lock for "newest-cni-20220725134004-44543" in 57.562µs
	I0725 13:41:02.370403   62641 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:41:02.370414   62641 fix.go:55] fixHost starting: 
	I0725 13:41:02.370643   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:02.437623   62641 fix.go:103] recreateIfNeeded on newest-cni-20220725134004-44543: state=Stopped err=<nil>
	W0725 13:41:02.437661   62641 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:41:02.481261   62641 out.go:177] * Restarting existing docker container for "newest-cni-20220725134004-44543" ...
	I0725 13:41:02.502954   62641 cli_runner.go:164] Run: docker start newest-cni-20220725134004-44543
	I0725 13:41:02.847905   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:02.921451   62641 kic.go:415] container "newest-cni-20220725134004-44543" state is running.
	I0725 13:41:02.922010   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:02.999381   62641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/config.json ...
	I0725 13:41:02.999787   62641 machine.go:88] provisioning docker machine ...
	I0725 13:41:02.999811   62641 ubuntu.go:169] provisioning hostname "newest-cni-20220725134004-44543"
	I0725 13:41:02.999890   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.075991   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.076182   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.076197   62641 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725134004-44543 && echo "newest-cni-20220725134004-44543" | sudo tee /etc/hostname
	I0725 13:41:03.206919   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725134004-44543
	
	I0725 13:41:03.206994   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.283133   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.283305   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.283326   62641 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725134004-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725134004-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725134004-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:41:03.405138   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:41:03.405172   62641 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:41:03.405208   62641 ubuntu.go:177] setting up certificates
	I0725 13:41:03.405220   62641 provision.go:83] configureAuth start
	I0725 13:41:03.405313   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:03.482288   62641 provision.go:138] copyHostCerts
	I0725 13:41:03.482371   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:41:03.482380   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:41:03.482467   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:41:03.482700   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:41:03.482708   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:41:03.482765   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:41:03.482898   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:41:03.482906   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:41:03.482968   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:41:03.483101   62641 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725134004-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725134004-44543]
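
	The SAN list in the log line above (the node IP, loopback, "localhost", "minikube", and the machine name) is what lets clients verify the Docker TLS endpoint under any of those addresses. Below is a self-contained sketch of issuing a certificate with that SAN shape using Go's standard library; it is an editor's illustration, not minikube's implementation, and it generates a throwaway CA in place of the ca.pem/ca-key.pem pair referenced in the paths above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the ca.pem/ca-key.pem pair in the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SAN shape from the log line above:
	// IP SANs for the node address and loopback, DNS SANs for the names.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20220725134004-44543"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-20220725134004-44543"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
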
	I0725 13:41:03.601079   62641 provision.go:172] copyRemoteCerts
	I0725 13:41:03.601148   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:41:03.601194   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.676833   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:03.765491   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:41:03.782662   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 13:41:03.799217   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:41:03.815241   62641 provision.go:86] duration metric: configureAuth took 409.996098ms
	I0725 13:41:03.815253   62641 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:41:03.815429   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:03.815493   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.886548   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.886702   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.886713   62641 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:41:04.008823   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:41:04.008850   62641 ubuntu.go:71] root file system type: overlay
	I0725 13:41:04.008999   62641 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:41:04.009069   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.080213   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:04.080363   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:04.080412   62641 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:41:04.215381   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:41:04.215466   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.287477   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:04.287633   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:04.287646   62641 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:41:04.413431   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:41:04.413445   62641 machine.go:91] provisioned docker machine in 1.413607753s
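The pair of SSH commands above gives an idempotent update: the new unit is written to docker.service.new, and the mv / daemon-reload / enable / restart branch only fires when diff reports a difference. A minimal Go sketch of the same compare-and-swap shape (paths and restart target taken from the log; updateUnit itself is a hypothetical helper, not minikube's code):

    package provision

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit swaps newContent into path only when it differs from what is
    // already there, so the reload/restart work runs on the change path only.
    func updateUnit(path string, newContent []byte) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return nil // unit unchanged: no daemon-reload, no restart
        }
        if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            return fmt.Errorf("daemon-reload: %w", err)
        }
        if err := exec.Command("systemctl", "enable", "docker").Run(); err != nil {
            return fmt.Errorf("enable docker: %w", err)
        }
        return exec.Command("systemctl", "restart", "docker").Run()
    }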
	I0725 13:41:04.413455   62641 start.go:307] post-start starting for "newest-cni-20220725134004-44543" (driver="docker")
	I0725 13:41:04.413460   62641 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:41:04.413520   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:41:04.413564   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.483819   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.573866   62641 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:41:04.577512   62641 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:41:04.577532   62641 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:41:04.577543   62641 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:41:04.577548   62641 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:41:04.577559   62641 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:41:04.577671   62641 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:41:04.577813   62641 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:41:04.577999   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:41:04.585135   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:41:04.601851   62641 start.go:310] post-start completed in 188.379013ms
	I0725 13:41:04.601920   62641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:41:04.601965   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.671831   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.762233   62641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:41:04.766670   62641 fix.go:57] fixHost completed within 2.396186108s
	I0725 13:41:04.766683   62641 start.go:82] releasing machines lock for "newest-cni-20220725134004-44543", held for 2.396220574s
	I0725 13:41:04.766757   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:04.837640   62641 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:41:04.837643   62641 ssh_runner.go:195] Run: systemctl --version
	I0725 13:41:04.837724   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.837723   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.913021   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.915823   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.996139   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 13:41:05.213499   62641 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0725 13:41:05.226262   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.290328   62641 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0725 13:41:05.366103   62641 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:41:05.375827   62641 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:41:05.375888   62641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:41:05.385531   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:41:05.398027   62641 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:41:05.465167   62641 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:41:05.534527   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.599370   62641 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:41:05.829754   62641 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:41:05.899999   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.969149   62641 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:41:05.978813   62641 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:41:05.978890   62641 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:41:05.982938   62641 start.go:471] Will wait 60s for crictl version
	I0725 13:41:05.982986   62641 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:41:06.011832   62641 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:41:06.011903   62641 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:41:06.045117   62641 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:41:06.102152   62641 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:41:06.102359   62641 cli_runner.go:164] Run: docker exec -t newest-cni-20220725134004-44543 dig +short host.docker.internal
	I0725 13:41:06.235422   62641 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:41:06.235509   62641 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:41:06.239630   62641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
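The grep -v / echo / cp pipeline above is an upsert: any stale host.minikube.internal record is filtered out before the fresh one is appended, so reruns never accumulate duplicates. The same idea as a hypothetical Go helper:

    package provision

    import (
        "os"
        "strings"
    )

    // upsertHost drops any existing /etc/hosts line ending in "<TAB>name",
    // then appends a fresh "ip<TAB>name" record.
    func upsertHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }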
	I0725 13:41:06.249184   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:06.343694   62641 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0725 13:41:06.365259   62641 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:41:06.365400   62641 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:41:06.396834   62641 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 13:41:06.396848   62641 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:41:06.396922   62641 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:41:06.425279   62641 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 13:41:06.425301   62641 cache_images.go:84] Images are preloaded, skipping loading
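Every image required for v1.24.2 already appears in the docker image list, so the preload tarball extraction is skipped. A sketch of that containment check (the required list would be the one printed above; imagesPreloaded is illustrative, with loaded coming from `docker images --format {{.Repository}}:{{.Tag}}`):

    package provision

    // imagesPreloaded reports whether every required repo:tag is already
    // present in the loaded image list.
    func imagesPreloaded(loaded, required []string) bool {
        have := make(map[string]bool, len(loaded))
        for _, img := range loaded {
            have[img] = true
        }
        for _, img := range required {
            if !have[img] {
                return false
            }
        }
        return true
    }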
	I0725 13:41:06.425374   62641 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:41:06.500080   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:06.500094   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:06.500110   62641 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0725 13:41:06.500125   62641 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220725134004-44543 NodeName:newest-cni-20220725134004-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:41:06.500228   62641 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220725134004-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:41:06.500302   62641 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220725134004-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
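Note that the kubelet ExecStart flags above come out in alphabetical order, which keeps the generated unit byte-stable across runs and plays well with the diff-gated update used earlier. A sketch of deterministic flag rendering (an assumed approach for illustration, not necessarily minikube's exact code):

    package provision

    import (
        "fmt"
        "sort"
        "strings"
    )

    // renderFlags emits --key=value pairs in sorted order so the generated
    // unit file is identical from one run to the next.
    func renderFlags(flags map[string]string) string {
        keys := make([]string, 0, len(flags))
        for k := range flags {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        parts := make([]string, 0, len(keys))
        for _, k := range keys {
            parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
        }
        return strings.Join(parts, " ")
    }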
	I0725 13:41:06.500378   62641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:41:06.508168   62641 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:41:06.508230   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:41:06.516145   62641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0725 13:41:06.529064   62641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:41:06.542568   62641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0725 13:41:06.555691   62641 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:41:06.559354   62641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:41:06.568796   62641 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543 for IP: 192.168.76.2
	I0725 13:41:06.568905   62641 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:41:06.568957   62641 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:41:06.569029   62641 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/client.key
	I0725 13:41:06.569092   62641 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.key.31bdca25
	I0725 13:41:06.569142   62641 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.key
	I0725 13:41:06.569349   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:41:06.569386   62641 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:41:06.569402   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:41:06.569437   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:41:06.569468   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:41:06.569497   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:41:06.569555   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:41:06.570104   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:41:06.586702   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:41:06.602950   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:41:06.619736   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:41:06.642804   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:41:06.658996   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:41:06.675332   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:41:06.691752   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:41:06.708295   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:41:06.724751   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:41:06.741061   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:41:06.757627   62641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:41:06.770205   62641 ssh_runner.go:195] Run: openssl version
	I0725 13:41:06.775104   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:41:06.782798   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.786665   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.786710   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.791947   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:41:06.798822   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:41:06.806371   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.810124   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.810165   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.815270   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:41:06.822348   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:41:06.829900   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.833621   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.833658   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.838937   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
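The openssl x509 -hash calls produce the subject-hash link names (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL expects to find in /etc/ssl/certs. A sketch of that install step, shelling out to openssl just as the remote commands do (trustCert is a hypothetical helper; the fixed .0 suffix assumes no hash collisions):

    package provision

    import (
        "os"
        "os/exec"
        "strings"
    )

    // trustCert symlinks certPath into /etc/ssl/certs under its subject hash,
    // the layout OpenSSL uses to look up CA certificates.
    func trustCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0" // ".0" assumes no collision
        _ = os.Remove(link)                     // mirror `ln -fs`
        return os.Symlink(certPath, link)
    }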
	I0725 13:41:06.845919   62641 kubeadm.go:395] StartCluster: {Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:06.846015   62641 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:41:06.874420   62641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:41:06.882063   62641 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:41:06.882077   62641 kubeadm.go:626] restartCluster start
	I0725 13:41:06.882121   62641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:41:06.888659   62641 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:06.888716   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:06.960144   62641 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220725134004-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:06.960306   62641 kubeconfig.go:127] "newest-cni-20220725134004-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:41:06.960639   62641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:06.961783   62641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:41:06.969405   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:06.969452   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:06.977546   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.177691   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.177841   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.188294   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.379725   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.379891   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.390695   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.578354   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.578503   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.589390   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.778053   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.778208   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.788555   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.978483   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.978583   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.987167   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.179736   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.179873   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.190170   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.379744   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.379930   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.390692   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.579742   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.579884   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.590517   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.778483   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.778568   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.787637   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.978431   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.978579   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.988922   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.179779   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.179927   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.191022   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.378051   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.378196   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.388421   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.577880   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.578004   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.588394   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.777861   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.778000   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.788393   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.979800   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.979933   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.990537   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.990547   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.990588   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.998309   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.998319   62641 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
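The block above polls pgrep roughly every 200 ms for about three seconds before concluding the apiserver needs a reconfigure. A generic sketch of that poll-until-timeout shape (interval, timeout, and the helper are illustrative; minikube's actual wait utilities differ in detail):

    package provision

    import (
        "errors"
        "time"
    )

    // pollUntil retries check every interval until it succeeds or timeout passes.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }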
	I0725 13:41:09.998326   62641 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:41:09.998375   62641 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:41:10.028922   62641 docker.go:443] Stopping containers: [2e12297de868 a9ff123b784e 2b1a2f7d82bf 226f6c6e0075 b9f0d14e47ae c538fc012392 2322278f6a58 775d14994103 433c6e459edd 17bf2e8fefb4 c5c26c8d67b9 457e1a17e981 50acabee2bf5 b7b19f1a5a2d 662ac0f86cb7 f89737e8ee78]
	I0725 13:41:10.028995   62641 ssh_runner.go:195] Run: docker stop 2e12297de868 a9ff123b784e 2b1a2f7d82bf 226f6c6e0075 b9f0d14e47ae c538fc012392 2322278f6a58 775d14994103 433c6e459edd 17bf2e8fefb4 c5c26c8d67b9 457e1a17e981 50acabee2bf5 b7b19f1a5a2d 662ac0f86cb7 f89737e8ee78
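All sixteen kube-system containers are stopped with a single docker stop invocation rather than one per container. A sketch that gathers the IDs with the same name filter and stops them in bulk (hypothetical helper):

    package provision

    import (
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers stops every container whose name matches the
    // k8s_<anything>_(kube-system)_ pattern used by dockershim/cri-dockerd.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil
        }
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }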
	I0725 13:41:10.059285   62641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:41:10.069060   62641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:41:10.076097   62641 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 20:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 25 20:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 25 20:40 /etc/kubernetes/scheduler.conf
	
	I0725 13:41:10.076143   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:41:10.082864   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:41:10.089720   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:41:10.096488   62641 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:10.096531   62641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:41:10.103289   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:41:10.110095   62641 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:10.110147   62641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:41:10.116727   62641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:41:10.123750   62641 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:41:10.123759   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:10.167626   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.147058   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.324377   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.369837   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.433701   62641 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:41:11.433762   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:11.946640   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.445830   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.944870   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.956307   62641 api_server.go:71] duration metric: took 1.522564033s to wait for apiserver process to appear ...
	I0725 13:41:12.956326   62641 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:41:12.956340   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:15.637409   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:41:15.637424   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:41:16.138026   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:16.146708   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:41:16.146728   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:41:16.637608   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:16.642891   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:41:16.642907   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:41:17.138172   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:17.144715   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 200:
	ok
	I0725 13:41:17.151405   62641 api_server.go:140] control plane version: v1.24.2
	I0725 13:41:17.151419   62641 api_server.go:130] duration metric: took 4.194962023s to wait for apiserver health ...
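The healthz progression above is the normal cold-start sequence: 403 while anonymous access is still restricted, 500 while the rbac/bootstrap-roles post-start hook finishes, then 200. A sketch of a single probe against the forwarded port (certificate verification is skipped because the apiserver cert is not issued for 127.0.0.1; checkHealthz is illustrative):

    package provision

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz returns nil only when /healthz answers 200.
    func checkHealthz(port int) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert covers the cluster SANs, not 127.0.0.1.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(fmt.Sprintf("https://127.0.0.1:%d/healthz", port))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }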
	I0725 13:41:17.151425   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:17.151435   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:17.151449   62641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:41:17.160753   62641 system_pods.go:59] 9 kube-system pods found
	I0725 13:41:17.160773   62641 system_pods.go:61] "coredns-6d4b75cb6d-hbn6k" [fc055ddb-e646-4d07-b88b-583f467837dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.160779   62641 system_pods.go:61] "coredns-6d4b75cb6d-x9jqp" [fa2d6b5c-bb0c-4fc9-9443-d933aed66032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.160785   62641 system_pods.go:61] "etcd-newest-cni-20220725134004-44543" [fadf380e-3d9e-49f1-b1c3-2802743dcb63] Running
	I0725 13:41:17.160789   62641 system_pods.go:61] "kube-apiserver-newest-cni-20220725134004-44543" [4c112752-e4f6-477a-9489-ff1a7b1a92e3] Running
	I0725 13:41:17.160796   62641 system_pods.go:61] "kube-controller-manager-newest-cni-20220725134004-44543" [8a1cdd56-6f7e-4d69-ae6e-8260c02c5acc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:41:17.160802   62641 system_pods.go:61] "kube-proxy-mm6ph" [820f3eb0-aba6-415b-a884-b67741ece355] Running
	I0725 13:41:17.160807   62641 system_pods.go:61] "kube-scheduler-newest-cni-20220725134004-44543" [06d1a7e9-4b67-47fa-b62c-eebb4e5067fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:41:17.160812   62641 system_pods.go:61] "metrics-server-5c6f97fb75-92v57" [32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:41:17.160817   62641 system_pods.go:61] "storage-provisioner" [d910dec0-c09f-4225-810a-5a5d773f923b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 13:41:17.160820   62641 system_pods.go:74] duration metric: took 9.367171ms to wait for pod list to return data ...
	I0725 13:41:17.160826   62641 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:41:17.164505   62641 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:41:17.164523   62641 node_conditions.go:123] node cpu capacity is 6
	I0725 13:41:17.164532   62641 node_conditions.go:105] duration metric: took 3.701498ms to run NodePressure ...
	I0725 13:41:17.164543   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:17.332026   62641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:41:17.339854   62641 ops.go:34] apiserver oom_adj: -16
	I0725 13:41:17.339874   62641 kubeadm.go:630] restartCluster took 10.457469712s
	I0725 13:41:17.339885   62641 kubeadm.go:397] StartCluster complete in 10.493656318s
	I0725 13:41:17.339902   62641 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:17.339979   62641 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:17.340560   62641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:17.343616   62641 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220725134004-44543" rescaled to 1
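Rescaling coredns from the default two replicas to one suits a single-node cluster. A client-go sketch of a scale-subresource update (clientset construction omitted; this illustrates the operation, not minikube's exact kapi code):

    package provision

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleDeployment sets the replica count on a deployment's scale subresource.
    func scaleDeployment(cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().Deployments(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
        return err
    }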
	I0725 13:41:17.343649   62641 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:41:17.343669   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:41:17.343676   62641 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:41:17.343824   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:17.367695   62641 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367523   62641 out.go:177] * Verifying Kubernetes components...
	I0725 13:41:17.367695   62641 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367750   62641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220725134004-44543"
	I0725 13:41:17.367698   62641 addons.go:65] Setting dashboard=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367839   62641 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.426431   62641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:41:17.426434   62641 addons.go:153] Setting addon dashboard=true in "newest-cni-20220725134004-44543"
	W0725 13:41:17.426475   62641 addons.go:162] addon dashboard should already be in state true
	W0725 13:41:17.426482   62641 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:41:17.367832   62641 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.368425   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.426544   62641 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.426562   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	W0725 13:41:17.426582   62641 addons.go:162] addon metrics-server should already be in state true
	I0725 13:41:17.426595   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.426646   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.430002   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.430013   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.430127   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.441775   62641 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 13:41:17.455220   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.619130   62641 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:41:17.564243   62641 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.578481   62641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:41:17.598496   62641 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:41:17.600117   62641 api_server.go:51] waiting for apiserver process to appear ...
	W0725 13:41:17.619193   62641 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:41:17.640288   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:41:17.640305   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:41:17.619216   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:17.640332   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.677208   62641 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:41:17.640379   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.640398   62641 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:41:17.640698   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.654154   62641 api_server.go:71] duration metric: took 310.477127ms to wait for apiserver process to appear ...
	I0725 13:41:17.714129   62641 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:41:17.714136   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:41:17.714143   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:17.714177   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:41:17.714187   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:41:17.714219   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.714247   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.726411   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 200:
	ok
	I0725 13:41:17.729015   62641 api_server.go:140] control plane version: v1.24.2
	I0725 13:41:17.729043   62641 api_server.go:130] duration metric: took 14.904203ms to wait for apiserver health ...
	I0725 13:41:17.729051   62641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:41:17.737689   62641 system_pods.go:59] 9 kube-system pods found
	I0725 13:41:17.737718   62641 system_pods.go:61] "coredns-6d4b75cb6d-hbn6k" [fc055ddb-e646-4d07-b88b-583f467837dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.737728   62641 system_pods.go:61] "coredns-6d4b75cb6d-x9jqp" [fa2d6b5c-bb0c-4fc9-9443-d933aed66032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.737735   62641 system_pods.go:61] "etcd-newest-cni-20220725134004-44543" [fadf380e-3d9e-49f1-b1c3-2802743dcb63] Running
	I0725 13:41:17.737742   62641 system_pods.go:61] "kube-apiserver-newest-cni-20220725134004-44543" [4c112752-e4f6-477a-9489-ff1a7b1a92e3] Running
	I0725 13:41:17.737755   62641 system_pods.go:61] "kube-controller-manager-newest-cni-20220725134004-44543" [8a1cdd56-6f7e-4d69-ae6e-8260c02c5acc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:41:17.737761   62641 system_pods.go:61] "kube-proxy-mm6ph" [820f3eb0-aba6-415b-a884-b67741ece355] Running
	I0725 13:41:17.737776   62641 system_pods.go:61] "kube-scheduler-newest-cni-20220725134004-44543" [06d1a7e9-4b67-47fa-b62c-eebb4e5067fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:41:17.737788   62641 system_pods.go:61] "metrics-server-5c6f97fb75-92v57" [32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:41:17.737799   62641 system_pods.go:61] "storage-provisioner" [d910dec0-c09f-4225-810a-5a5d773f923b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 13:41:17.737808   62641 system_pods.go:74] duration metric: took 8.751882ms to wait for pod list to return data ...
	I0725 13:41:17.737815   62641 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:41:17.741962   62641 default_sa.go:45] found service account: "default"
	I0725 13:41:17.741978   62641 default_sa.go:55] duration metric: took 4.138262ms for default service account to be created ...
	I0725 13:41:17.741988   62641 kubeadm.go:572] duration metric: took 398.309938ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0725 13:41:17.742005   62641 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:41:17.747495   62641 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:41:17.747515   62641 node_conditions.go:123] node cpu capacity is 6
	I0725 13:41:17.747525   62641 node_conditions.go:105] duration metric: took 5.502342ms to run NodePressure ...
	I0725 13:41:17.747537   62641 start.go:216] waiting for startup goroutines ...
	I0725 13:41:17.830346   62641 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:41:17.830368   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:41:17.830438   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.845326   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.847941   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.850143   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.916591   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:18.017591   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:41:18.017604   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:41:18.021389   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:41:18.021403   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:41:18.023473   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:41:18.040806   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:41:18.040824   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:41:18.105407   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:41:18.105424   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:41:18.113954   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:41:18.113970   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:41:18.120629   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:41:18.126078   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:41:18.126090   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:41:18.145916   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:41:18.145948   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:41:18.207466   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:41:18.235227   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:41:18.235244   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:41:18.338414   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:41:18.338432   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:41:18.433557   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:41:18.433577   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:41:18.453890   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:41:18.453908   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:41:18.515844   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:41:18.515863   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:41:18.541656   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:41:19.239622   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.216077847s)
	I0725 13:41:19.239671   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.118958728s)
	I0725 13:41:19.239723   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032203727s)
	I0725 13:41:19.239744   62641 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220725134004-44543"
	I0725 13:41:19.449476   62641 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:41:19.491637   62641 addons.go:414] enableAddons completed in 2.147902561s
	I0725 13:41:19.530210   62641 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:41:19.558148   62641 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220725134004-44543" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:21:55 UTC, end at Mon 2022-07-25 20:48:52 UTC. --
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.451853324Z" level=info msg="Processing signal 'terminated'"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.452788005Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.453258112Z" level=info msg="Daemon shutdown complete"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[129]: time="2022-07-25T20:21:57.453320986Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: docker.service: Succeeded.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Stopped Docker Application Container Engine.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Starting Docker Application Container Engine...
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.506263841Z" level=info msg="Starting up"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508857550Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508891909Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508909432Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.508917186Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509870019Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509899398Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509912393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.509918763Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.513919873Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.517902418Z" level=info msg="Loading containers: start."
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.592180966Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.621348334Z" level=info msg="Loading containers: done."
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.629449532Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.629505415Z" level=info msg="Daemon has completed initialization"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.651604471Z" level=info msg="API listen on [::]:2376"
	Jul 25 20:21:57 old-k8s-version-20220725131610-44543 dockerd[424]: time="2022-07-25T20:21:57.655414726Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-07-25T20:48:54Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:48:54 up  1:30,  0 users,  load average: 0.08, 0.36, 0.76
	Linux old-k8s-version-20220725131610-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:21:55 UTC, end at Mon 2022-07-25 20:48:54 UTC. --
	Jul 25 20:48:52 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1670.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: I0725 20:48:53.495515   34154 server.go:410] Version: v1.16.0
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: I0725 20:48:53.495711   34154 plugins.go:100] No cloud provider specified.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: I0725 20:48:53.495744   34154 server.go:773] Client rotation is on, will bootstrap in background
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: I0725 20:48:53.497471   34154 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: W0725 20:48:53.498173   34154 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: W0725 20:48:53.498239   34154 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 kubelet[34154]: F0725 20:48:53.498267   34154 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 20:48:53 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1671.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: I0725 20:48:54.247952   34169 server.go:410] Version: v1.16.0
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: I0725 20:48:54.248196   34169 plugins.go:100] No cloud provider specified.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: I0725 20:48:54.248207   34169 server.go:773] Client rotation is on, will bootstrap in background
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: I0725 20:48:54.249997   34169 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: W0725 20:48:54.250660   34169 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: W0725 20:48:54.250750   34169 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 kubelet[34169]: F0725 20:48:54.250789   34169 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 20:48:54 old-k8s-version-20220725131610-44543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0725 13:48:54.368801   63376 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
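The "connection to the server localhost:8443 was refused" errors above mean the apiserver for this profile never came back up. minikube's own readiness probe for it is the api_server.go:240 "Checking apiserver healthz at https://127.0.0.1:<port>/healthz" step seen in the Last Start log; a minimal sketch of that kind of probe (an approximation, not minikube's exact code; 61134 is the host port Docker mapped to the container's 8443/tcp in the logs above):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves a self-signed certificate, so a bare liveness
		// probe skips verification; a healthy apiserver answers /healthz with "ok".
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://127.0.0.1:61134/healthz") // host side of 8443/tcp
		if err != nil {
			fmt.Println("healthz probe failed:", err) // e.g. "connection refused", as in the stderr above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}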
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 2 (433.078189ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220725131610-44543" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.94s)
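The root cause is visible in the "==> kubelet <==" section of the logs above: kubelet v1.16.0 crash-loops (systemd restart counter past 1670) with "failed to run Kubelet: mountpoint for cpu not found", so no static pods (apiserver included) can start and every kubectl call is refused. That fatal amounts to finding no cgroup mount that carries the cpu subsystem; a minimal sketch of such a check (an approximation of the idea, not kubelet's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// cpuCgroupMountpoint scans /proc/mounts for a cgroup-v1 mount that
	// carries the "cpu" subsystem. On a cgroup-v2-only host no such mount
	// exists, which is the "mountpoint for cpu not found" condition above.
	func cpuCgroupMountpoint() (string, error) {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// /proc/mounts fields: device mountpoint fstype options dump pass
			fields := strings.Fields(sc.Text())
			if len(fields) < 4 || fields[2] != "cgroup" {
				continue
			}
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					return fields[1], nil
				}
			}
		}
		return "", fmt.Errorf("mountpoint for cpu not found")
	}

	func main() {
		mp, err := cpuCgroupMountpoint()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(mp)
	}

A kubelet as old as v1.16 predates cgroup v2 support, so on a host that mounts only the unified hierarchy (plausibly the case for this 5.10.104-linuxkit kernel) a scan like this can never succeed.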

x
+
TestStartStop/group/newest-cni/serial/Pause (49.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220725134004-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543: exit status 2 (16.108668163s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
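For reference, a hypothetical condensation of the check at start_stop_delete_test.go:311 that printed this message: run status --format={{.APIServer}} after pause and require the literal string "Paused":

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// After `minikube pause`, the APIServer field of `minikube status`
		// should read "Paused"; in the run above it read "Stopped".
		out, _ := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.APIServer}}", "-p", "newest-cni-20220725134004-44543").Output()
		if got := strings.TrimSpace(string(out)); got != "Paused" {
			fmt.Printf("post-pause apiserver status = %q; want = %q\n", got, "Paused")
		}
	}

Output returns the captured stdout even on a non-zero exit (the error is an *exec.ExitError), which is why the sketch can ignore the error and still compare the printed state.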
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
E0725 13:41:38.238914   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543: exit status 2 (16.105644802s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220725134004-44543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 unpause -p newest-cni-20220725134004-44543 --alsologtostderr -v=1: (1.070867452s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220725134004-44543
helpers_test.go:235: (dbg) docker inspect newest-cni-20220725134004-44543:

-- stdout --
	[
	    {
	        "Id": "1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b",
	        "Created": "2022-07-25T20:40:11.821869037Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:41:02.844266Z",
	            "FinishedAt": "2022-07-25T20:41:00.916699019Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/hosts",
	        "LogPath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b-json.log",
	        "Name": "/newest-cni-20220725134004-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220725134004-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220725134004-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b32028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/docker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220725134004-44543",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220725134004-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220725134004-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220725134004-44543",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220725134004-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "481db35061c822c7c75df96b8c600d7e081d8c5b2f26b9ba4e5d2e5a885f6c8f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61133"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61134"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/481db35061c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220725134004-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1fceac213dcb",
	                        "newest-cni-20220725134004-44543"
	                    ],
	                    "NetworkID": "b58b042d3e0330688fdf5ac0e347631e9444253fe03e9422e5a8023128b5d083",
	                    "EndpointID": "871f98eaca7ba77da270a748d3a0404ce10fbf3a7381db692d2ff0f9b73c2ea6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
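Throughout these logs minikube reads host ports by shelling out to docker container inspect with a Go template ({{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}); the same lookup against the JSON above can be done in-process, roughly like this (a sketch assuming the docker CLI is on PATH and the container still exists):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// container models just the NetworkSettings.Ports shape of the
	// `docker inspect` JSON shown above.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-20220725134004-44543").Output()
		if err != nil {
			panic(err)
		}
		var cs []container // docker inspect always prints a JSON array
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// For the container above this prints 61134, the host side of 8443/tcp.
		fmt.Println(cs[0].NetworkSettings.Ports["8443/tcp"][0].HostPort)
	}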
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220725134004-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220725134004-44543 logs -n 25: (4.053884015s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725133257-44543      | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | disable-driver-mounts-20220725133257-44543                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725134004-44543 --memory=2200           | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725134004-44543 --memory=2200           | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:41:01
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
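	The [IWEF] prefix is the standard klog/glog severity marker (Info, Warning, Error, Fatal), followed by month/day, a timestamp, the thread id, and the source location. A minimal sketch, assuming the log has been saved to a file named start.log (a placeholder name), for pulling out only the warning and error lines:

	    # print only W.../E... (warning/error) lines from a klog-formatted log
	    grep -E '^[[:space:]]*[WE][0-9]{4} ' start.log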
	I0725 13:41:01.604769   62641 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:41:01.604901   62641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:41:01.604906   62641 out.go:309] Setting ErrFile to fd 2...
	I0725 13:41:01.604910   62641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:41:01.605019   62641 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:41:01.605467   62641 out.go:303] Setting JSON to false
	I0725 13:41:01.620475   62641 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":16833,"bootTime":1658764828,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:41:01.620568   62641 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:41:01.642382   62641 out.go:177] * [newest-cni-20220725134004-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:41:01.664186   62641 notify.go:193] Checking for updates...
	I0725 13:41:01.685918   62641 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:41:01.707967   62641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:01.729065   62641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:41:01.751081   62641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:41:01.772267   62641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:41:01.794758   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:01.795416   62641 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:41:01.864839   62641 docker.go:137] docker version: linux-20.10.17
	I0725 13:41:01.864995   62641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:41:01.998821   62641 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:41:01.926956783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:41:02.020502   62641 out.go:177] * Using the docker driver based on existing profile
	I0725 13:41:02.041400   62641 start.go:284] selected driver: docker
	I0725 13:41:02.041426   62641 start.go:808] validating driver "docker" against &{Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:02.041590   62641 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:41:02.044316   62641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:41:02.175804   62641 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:41:02.106306455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:41:02.175969   62641 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 13:41:02.175988   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:02.175998   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:02.176021   62641 start_flags.go:310] config:
	{Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:02.219416   62641 out.go:177] * Starting control plane node newest-cni-20220725134004-44543 in cluster newest-cni-20220725134004-44543
	I0725 13:41:02.240914   62641 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:41:02.263062   62641 out.go:177] * Pulling base image ...
	I0725 13:41:02.305845   62641 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:41:02.305897   62641 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:41:02.305926   62641 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:41:02.305951   62641 cache.go:57] Caching tarball of preloaded images
	I0725 13:41:02.306134   62641 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:41:02.306156   62641 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:41:02.307199   62641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/config.json ...
	I0725 13:41:02.370228   62641 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:41:02.370244   62641 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:41:02.370255   62641 cache.go:208] Successfully downloaded all kic artifacts
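	The pull above is skipped because the pinned kicbase image is already in the local daemon. A minimal sketch of the same presence check, reusing the image reference from the log; the control flow here is illustrative, not minikube's actual code:

	    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842'
	    # docker image inspect exits 0 only if the image exists locally
	    if docker image inspect "$IMG" >/dev/null 2>&1; then
	        echo 'found in local docker daemon, skipping pull'
	    else
	        docker pull "$IMG"
	    fi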
	I0725 13:41:02.370307   62641 start.go:370] acquiring machines lock for newest-cni-20220725134004-44543: {Name:mk938127dcd35e39de5792da4cde1f6031a6baad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:41:02.370384   62641 start.go:374] acquired machines lock for "newest-cni-20220725134004-44543" in 57.562µs
	I0725 13:41:02.370403   62641 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:41:02.370414   62641 fix.go:55] fixHost starting: 
	I0725 13:41:02.370643   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:02.437623   62641 fix.go:103] recreateIfNeeded on newest-cni-20220725134004-44543: state=Stopped err=<nil>
	W0725 13:41:02.437661   62641 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:41:02.481261   62641 out.go:177] * Restarting existing docker container for "newest-cni-20220725134004-44543" ...
	I0725 13:41:02.502954   62641 cli_runner.go:164] Run: docker start newest-cni-20220725134004-44543
	I0725 13:41:02.847905   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:02.921451   62641 kic.go:415] container "newest-cni-20220725134004-44543" state is running.
	I0725 13:41:02.922010   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:02.999381   62641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/config.json ...
	I0725 13:41:02.999787   62641 machine.go:88] provisioning docker machine ...
	I0725 13:41:02.999811   62641 ubuntu.go:169] provisioning hostname "newest-cni-20220725134004-44543"
	I0725 13:41:02.999890   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.075991   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.076182   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.076197   62641 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725134004-44543 && echo "newest-cni-20220725134004-44543" | sudo tee /etc/hostname
	I0725 13:41:03.206919   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725134004-44543
	
	I0725 13:41:03.206994   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.283133   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.283305   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.283326   62641 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725134004-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725134004-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725134004-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:41:03.405138   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: 
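	The script executed above is an idempotent /etc/hosts update: rewrite an existing 127.0.1.1 entry if one is present, otherwise append a new one, and do nothing when the hostname is already mapped. The same pattern in isolation, with NODE standing in for the provisioned hostname:

	    NODE=newest-cni-20220725134004-44543
	    if ! grep -q "$NODE" /etc/hosts; then
	        if grep -q '^127\.0\.1\.1' /etc/hosts; then
	            # replace the existing loopback alias in place
	            sudo sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $NODE/" /etc/hosts
	        else
	            # no existing alias: append one
	            echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
	        fi
	    fi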
	I0725 13:41:03.405172   62641 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:41:03.405208   62641 ubuntu.go:177] setting up certificates
	I0725 13:41:03.405220   62641 provision.go:83] configureAuth start
	I0725 13:41:03.405313   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:03.482288   62641 provision.go:138] copyHostCerts
	I0725 13:41:03.482371   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:41:03.482380   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:41:03.482467   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:41:03.482700   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:41:03.482708   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:41:03.482765   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:41:03.482898   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:41:03.482906   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:41:03.482968   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:41:03.483101   62641 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725134004-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725134004-44543]
	I0725 13:41:03.601079   62641 provision.go:172] copyRemoteCerts
	I0725 13:41:03.601148   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:41:03.601194   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.676833   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:03.765491   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:41:03.782662   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 13:41:03.799217   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:41:03.815241   62641 provision.go:86] duration metric: configureAuth took 409.996098ms
	I0725 13:41:03.815253   62641 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:41:03.815429   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:03.815493   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.886548   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.886702   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.886713   62641 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:41:04.008823   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:41:04.008850   62641 ubuntu.go:71] root file system type: overlay
	I0725 13:41:04.008999   62641 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:41:04.009069   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.080213   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:04.080363   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:04.080412   62641 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:41:04.215381   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:41:04.215466   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.287477   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:04.287633   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:04.287646   62641 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:41:04.413431   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:41:04.413445   62641 machine.go:91] provisioned docker machine in 1.413607753s
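	The command above is a compare-then-swap: diff exits non-zero only when the freshly rendered unit differs from the installed one, so the daemon is reloaded and Docker restarted only on an actual change. Reduced to its essentials, with the paths taken from the log:

	    NEW=/lib/systemd/system/docker.service.new
	    CUR=/lib/systemd/system/docker.service
	    # install and restart only when the rendered unit actually changed
	    sudo diff -u "$CUR" "$NEW" || {
	        sudo mv "$NEW" "$CUR"
	        sudo systemctl daemon-reload
	        sudo systemctl restart docker
	    }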
	I0725 13:41:04.413455   62641 start.go:307] post-start starting for "newest-cni-20220725134004-44543" (driver="docker")
	I0725 13:41:04.413460   62641 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:41:04.413520   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:41:04.413564   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.483819   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.573866   62641 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:41:04.577512   62641 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:41:04.577532   62641 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:41:04.577543   62641 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:41:04.577548   62641 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:41:04.577559   62641 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:41:04.577671   62641 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:41:04.577813   62641 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:41:04.577999   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:41:04.585135   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:41:04.601851   62641 start.go:310] post-start completed in 188.379013ms
	I0725 13:41:04.601920   62641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:41:04.601965   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.671831   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.762233   62641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:41:04.766670   62641 fix.go:57] fixHost completed within 2.396186108s
	I0725 13:41:04.766683   62641 start.go:82] releasing machines lock for "newest-cni-20220725134004-44543", held for 2.396220574s
	I0725 13:41:04.766757   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:04.837640   62641 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:41:04.837643   62641 ssh_runner.go:195] Run: systemctl --version
	I0725 13:41:04.837724   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.837723   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.913021   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.915823   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.996139   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 13:41:05.213499   62641 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0725 13:41:05.226262   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.290328   62641 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0725 13:41:05.366103   62641 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:41:05.375827   62641 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:41:05.375888   62641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:41:05.385531   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:41:05.398027   62641 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:41:05.465167   62641 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:41:05.534527   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.599370   62641 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:41:05.829754   62641 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:41:05.899999   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.969149   62641 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:41:05.978813   62641 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:41:05.978890   62641 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:41:05.982938   62641 start.go:471] Will wait 60s for crictl version
	I0725 13:41:05.982986   62641 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:41:06.011832   62641 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0725 13:41:06.011903   62641 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:41:06.045117   62641 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:41:06.102152   62641 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:41:06.102359   62641 cli_runner.go:164] Run: docker exec -t newest-cni-20220725134004-44543 dig +short host.docker.internal
	I0725 13:41:06.235422   62641 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:41:06.235509   62641 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:41:06.239630   62641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:41:06.249184   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:06.343694   62641 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0725 13:41:06.365259   62641 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:41:06.365400   62641 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:41:06.396834   62641 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 13:41:06.396848   62641 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:41:06.396922   62641 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:41:06.425279   62641 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 13:41:06.425301   62641 cache_images.go:84] Images are preloaded, skipping loading
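	Extraction of the preload tarball is skipped because every expected image already appears in the `docker images` listing above. A minimal sketch of that kind of check, assuming a hand-picked subset of the expected list:

	    for img in k8s.gcr.io/kube-apiserver:v1.24.2 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7; do
	        # -x matches whole lines, so partial tag matches don't count
	        docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx "$img" || echo "missing: $img"
	    done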
	I0725 13:41:06.425374   62641 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:41:06.500080   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:06.500094   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:06.500110   62641 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0725 13:41:06.500125   62641 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220725134004-44543 NodeName:newest-cni-20220725134004-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:41:06.500228   62641 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220725134004-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:41:06.500302   62641 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220725134004-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:41:06.500378   62641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:41:06.508168   62641 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:41:06.508230   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:41:06.516145   62641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0725 13:41:06.529064   62641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:41:06.542568   62641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
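	The 2189-byte file written here is the kubeadm config rendered above. One way to sanity-check such a file without changing the node, assuming a kubeadm binary matching v1.24.x is on the PATH inside the container (a step the test itself does not perform), is a dry run:

	    # render what kubeadm would do for this config without applying it
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run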
	I0725 13:41:06.555691   62641 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:41:06.559354   62641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:41:06.568796   62641 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543 for IP: 192.168.76.2
	I0725 13:41:06.568905   62641 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:41:06.568957   62641 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:41:06.569029   62641 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/client.key
	I0725 13:41:06.569092   62641 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.key.31bdca25
	I0725 13:41:06.569142   62641 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.key
	I0725 13:41:06.569349   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:41:06.569386   62641 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:41:06.569402   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:41:06.569437   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:41:06.569468   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:41:06.569497   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:41:06.569555   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:41:06.570104   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:41:06.586702   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:41:06.602950   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:41:06.619736   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:41:06.642804   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:41:06.658996   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:41:06.675332   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:41:06.691752   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:41:06.708295   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:41:06.724751   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:41:06.741061   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:41:06.757627   62641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
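Two transfer shapes appear in the scp block above: plain file copies (local path --> remote path) and "scp memory --> ...", where the payload (here a 738-byte kubeconfig) never touches the local disk and is streamed straight to the remote path. A minimal sketch of the in-memory variant, assuming plain ssh piping into sudo tee rather than minikube's internal ssh_runner; the 127.0.0.1:61130 endpoint matches the SSH port shown later in this log, and the payload bytes are placeholders:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Placeholder payload; the real run streams a 738-byte kubeconfig.
    	payload := []byte("apiVersion: v1\nkind: Config\n")
    	// Stream stdin to the remote file; tee needs no local temp file.
    	cmd := exec.Command("ssh", "-p", "61130", "docker@127.0.0.1",
    		"sudo tee /var/lib/minikube/kubeconfig >/dev/null")
    	cmd.Stdin = bytes.NewReader(payload)
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }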
	I0725 13:41:06.770205   62641 ssh_runner.go:195] Run: openssl version
	I0725 13:41:06.775104   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:41:06.782798   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.786665   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.786710   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.791947   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:41:06.798822   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:41:06.806371   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.810124   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.810165   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.815270   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:41:06.822348   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:41:06.829900   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.833621   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.833658   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.838937   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
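The openssl sequence above installs each CA under its OpenSSL subject-hash name in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trust anchors. A minimal sketch of one iteration of that step, shelling out to openssl the same way the log does; the sample path is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA mirrors the symlink step in the log: compute the OpenSSL
    // subject hash of a PEM certificate, then link it into
    // /etc/ssl/certs/<hash>.0 so OpenSSL can find it by hash lookup.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs equivalent: drop any stale link, then create a fresh one.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Illustrative path; the run above processes three such PEM files.
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }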
	I0725 13:41:06.845919   62641 kubeadm.go:395] StartCluster: {Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:06.846015   62641 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:41:06.874420   62641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:41:06.882063   62641 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:41:06.882077   62641 kubeadm.go:626] restartCluster start
	I0725 13:41:06.882121   62641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:41:06.888659   62641 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:06.888716   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:06.960144   62641 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220725134004-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:06.960306   62641 kubeconfig.go:127] "newest-cni-20220725134004-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:41:06.960639   62641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:06.961783   62641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:41:06.969405   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:06.969452   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:06.977546   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.177691   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.177841   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.188294   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.379725   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.379891   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.390695   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.578354   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.578503   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.589390   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.778053   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.778208   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.788555   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.978483   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.978583   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.987167   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.179736   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.179873   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.190170   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.379744   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.379930   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.390692   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.579742   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.579884   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.590517   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.778483   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.778568   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.787637   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.978431   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.978579   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.988922   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.179779   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.179927   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.191022   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.378051   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.378196   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.388421   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.577880   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.578004   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.588394   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.777861   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.778000   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.788393   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.979800   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.979933   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.990537   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.990547   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.990588   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.998309   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.998319   62641 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
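The preceding block is a fixed-cadence poll: roughly every 200ms the same sudo pgrep probe is retried until a kube-apiserver pid appears or the wait gives up, at which point minikube concludes the cluster needs a reconfigure. A sketch of that retry shape, with the ~200ms interval and the short overall budget inferred from the timestamps above rather than taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForAPIServerPID polls pgrep at a fixed interval until the
    // kube-apiserver process appears or the deadline passes, mirroring
    // the retry loop in the log above.
    func waitForAPIServerPID(interval, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // pgrep exits 0 only on a match
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("timed out waiting for the condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	pid, err := waitForAPIServerPID(200*time.Millisecond, 3*time.Second)
    	if err != nil {
    		fmt.Println("needs reconfigure:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }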
	I0725 13:41:09.998326   62641 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:41:09.998375   62641 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:41:10.028922   62641 docker.go:443] Stopping containers: [2e12297de868 a9ff123b784e 2b1a2f7d82bf 226f6c6e0075 b9f0d14e47ae c538fc012392 2322278f6a58 775d14994103 433c6e459edd 17bf2e8fefb4 c5c26c8d67b9 457e1a17e981 50acabee2bf5 b7b19f1a5a2d 662ac0f86cb7 f89737e8ee78]
	I0725 13:41:10.028995   62641 ssh_runner.go:195] Run: docker stop 2e12297de868 a9ff123b784e 2b1a2f7d82bf 226f6c6e0075 b9f0d14e47ae c538fc012392 2322278f6a58 775d14994103 433c6e459edd 17bf2e8fefb4 c5c26c8d67b9 457e1a17e981 50acabee2bf5 b7b19f1a5a2d 662ac0f86cb7 f89737e8ee78
	I0725 13:41:10.059285   62641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
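Reconfiguration starts by quiescing the old control plane: list every container whose name matches k8s_.*_(kube-system)_, stop them in one docker stop batch, then stop the kubelet. A sketch of the filter-and-stop step, reusing the exact name filter from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return // nothing to stop
    	}
    	fmt.Println("Stopping containers:", ids)
    	// One batched stop, as in the log, rather than one call per container.
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }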
	I0725 13:41:10.069060   62641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:41:10.076097   62641 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 20:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 25 20:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 25 20:40 /etc/kubernetes/scheduler.conf
	
	I0725 13:41:10.076143   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:41:10.082864   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:41:10.089720   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:41:10.096488   62641 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:10.096531   62641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:41:10.103289   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:41:10.110095   62641 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:10.110147   62641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:41:10.116727   62641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:41:10.123750   62641 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:41:10.123759   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:10.167626   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.147058   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.324377   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.369837   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
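Rather than a full kubeadm init, the restart path replays individual init phases against the same /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A sketch of driving those phases, with the versioned binary PATH and config path copied from the log and error handling simplified:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		// Matches the log: sudo env PATH=... kubeadm init phase <p> --config ...
    		cmd := exec.Command("/bin/bash", "-c",
    			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }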
	I0725 13:41:11.433701   62641 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:41:11.433762   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:11.946640   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.445830   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.944870   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.956307   62641 api_server.go:71] duration metric: took 1.522564033s to wait for apiserver process to appear ...
	I0725 13:41:12.956326   62641 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:41:12.956340   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:15.637409   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:41:15.637424   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:41:16.138026   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:16.146708   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:41:16.146728   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:41:16.637608   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:16.642891   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:41:16.642907   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:41:17.138172   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:17.144715   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 200:
	ok
	I0725 13:41:17.151405   62641 api_server.go:140] control plane version: v1.24.2
	I0725 13:41:17.151419   62641 api_server.go:130] duration metric: took 4.194962023s to wait for apiserver health ...
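The healthz wait above passes through three expected states: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks (rbac/bootstrap-roles, then scheduling/bootstrap-system-priority-classes) finish, and finally 200 "ok". A minimal probe with the same semantics, assuming the forwarded local port from the log and skipping TLS verification as an unauthenticated bootstrap check must:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver's serving cert is not in the host trust store,
    		// so a bootstrap health probe has to skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://127.0.0.1:61134/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("healthz never returned ok")
    }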
	I0725 13:41:17.151425   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:17.151435   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:17.151449   62641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:41:17.160753   62641 system_pods.go:59] 9 kube-system pods found
	I0725 13:41:17.160773   62641 system_pods.go:61] "coredns-6d4b75cb6d-hbn6k" [fc055ddb-e646-4d07-b88b-583f467837dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.160779   62641 system_pods.go:61] "coredns-6d4b75cb6d-x9jqp" [fa2d6b5c-bb0c-4fc9-9443-d933aed66032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.160785   62641 system_pods.go:61] "etcd-newest-cni-20220725134004-44543" [fadf380e-3d9e-49f1-b1c3-2802743dcb63] Running
	I0725 13:41:17.160789   62641 system_pods.go:61] "kube-apiserver-newest-cni-20220725134004-44543" [4c112752-e4f6-477a-9489-ff1a7b1a92e3] Running
	I0725 13:41:17.160796   62641 system_pods.go:61] "kube-controller-manager-newest-cni-20220725134004-44543" [8a1cdd56-6f7e-4d69-ae6e-8260c02c5acc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:41:17.160802   62641 system_pods.go:61] "kube-proxy-mm6ph" [820f3eb0-aba6-415b-a884-b67741ece355] Running
	I0725 13:41:17.160807   62641 system_pods.go:61] "kube-scheduler-newest-cni-20220725134004-44543" [06d1a7e9-4b67-47fa-b62c-eebb4e5067fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:41:17.160812   62641 system_pods.go:61] "metrics-server-5c6f97fb75-92v57" [32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:41:17.160817   62641 system_pods.go:61] "storage-provisioner" [d910dec0-c09f-4225-810a-5a5d773f923b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 13:41:17.160820   62641 system_pods.go:74] duration metric: took 9.367171ms to wait for pod list to return data ...
	I0725 13:41:17.160826   62641 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:41:17.164505   62641 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:41:17.164523   62641 node_conditions.go:123] node cpu capacity is 6
	I0725 13:41:17.164532   62641 node_conditions.go:105] duration metric: took 3.701498ms to run NodePressure ...
	I0725 13:41:17.164543   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:17.332026   62641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:41:17.339854   62641 ops.go:34] apiserver oom_adj: -16
	I0725 13:41:17.339874   62641 kubeadm.go:630] restartCluster took 10.457469712s
	I0725 13:41:17.339885   62641 kubeadm.go:397] StartCluster complete in 10.493656318s
	I0725 13:41:17.339902   62641 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:17.339979   62641 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:17.340560   62641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:17.343616   62641 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220725134004-44543" rescaled to 1
	I0725 13:41:17.343649   62641 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:41:17.343669   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:41:17.343676   62641 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:41:17.343824   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:17.367695   62641 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367523   62641 out.go:177] * Verifying Kubernetes components...
	I0725 13:41:17.367695   62641 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367750   62641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220725134004-44543"
	I0725 13:41:17.367698   62641 addons.go:65] Setting dashboard=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367839   62641 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.426431   62641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:41:17.426434   62641 addons.go:153] Setting addon dashboard=true in "newest-cni-20220725134004-44543"
	W0725 13:41:17.426475   62641 addons.go:162] addon dashboard should already be in state true
	W0725 13:41:17.426482   62641 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:41:17.367832   62641 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.368425   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.426544   62641 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.426562   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	W0725 13:41:17.426582   62641 addons.go:162] addon metrics-server should already be in state true
	I0725 13:41:17.426595   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.426646   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.430002   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.430013   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.430127   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.441775   62641 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 13:41:17.455220   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.619130   62641 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:41:17.564243   62641 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.578481   62641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:41:17.598496   62641 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:41:17.600117   62641 api_server.go:51] waiting for apiserver process to appear ...
	W0725 13:41:17.619193   62641 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:41:17.640288   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:41:17.640305   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:41:17.619216   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:17.640332   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.677208   62641 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:41:17.640379   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.640398   62641 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:41:17.640698   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.654154   62641 api_server.go:71] duration metric: took 310.477127ms to wait for apiserver process to appear ...
	I0725 13:41:17.714129   62641 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:41:17.714136   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:41:17.714143   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:17.714177   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:41:17.714187   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:41:17.714219   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.714247   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.726411   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 200:
	ok
	I0725 13:41:17.729015   62641 api_server.go:140] control plane version: v1.24.2
	I0725 13:41:17.729043   62641 api_server.go:130] duration metric: took 14.904203ms to wait for apiserver health ...
	I0725 13:41:17.729051   62641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:41:17.737689   62641 system_pods.go:59] 9 kube-system pods found
	I0725 13:41:17.737718   62641 system_pods.go:61] "coredns-6d4b75cb6d-hbn6k" [fc055ddb-e646-4d07-b88b-583f467837dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.737728   62641 system_pods.go:61] "coredns-6d4b75cb6d-x9jqp" [fa2d6b5c-bb0c-4fc9-9443-d933aed66032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.737735   62641 system_pods.go:61] "etcd-newest-cni-20220725134004-44543" [fadf380e-3d9e-49f1-b1c3-2802743dcb63] Running
	I0725 13:41:17.737742   62641 system_pods.go:61] "kube-apiserver-newest-cni-20220725134004-44543" [4c112752-e4f6-477a-9489-ff1a7b1a92e3] Running
	I0725 13:41:17.737755   62641 system_pods.go:61] "kube-controller-manager-newest-cni-20220725134004-44543" [8a1cdd56-6f7e-4d69-ae6e-8260c02c5acc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:41:17.737761   62641 system_pods.go:61] "kube-proxy-mm6ph" [820f3eb0-aba6-415b-a884-b67741ece355] Running
	I0725 13:41:17.737776   62641 system_pods.go:61] "kube-scheduler-newest-cni-20220725134004-44543" [06d1a7e9-4b67-47fa-b62c-eebb4e5067fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:41:17.737788   62641 system_pods.go:61] "metrics-server-5c6f97fb75-92v57" [32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:41:17.737799   62641 system_pods.go:61] "storage-provisioner" [d910dec0-c09f-4225-810a-5a5d773f923b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 13:41:17.737808   62641 system_pods.go:74] duration metric: took 8.751882ms to wait for pod list to return data ...
	I0725 13:41:17.737815   62641 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:41:17.741962   62641 default_sa.go:45] found service account: "default"
	I0725 13:41:17.741978   62641 default_sa.go:55] duration metric: took 4.138262ms for default service account to be created ...
	I0725 13:41:17.741988   62641 kubeadm.go:572] duration metric: took 398.309938ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0725 13:41:17.742005   62641 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:41:17.747495   62641 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:41:17.747515   62641 node_conditions.go:123] node cpu capacity is 6
	I0725 13:41:17.747525   62641 node_conditions.go:105] duration metric: took 5.502342ms to run NodePressure ...
	I0725 13:41:17.747537   62641 start.go:216] waiting for startup goroutines ...
	I0725 13:41:17.830346   62641 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:41:17.830368   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:41:17.830438   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.845326   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.847941   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.850143   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.916591   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:18.017591   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:41:18.017604   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:41:18.021389   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:41:18.021403   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:41:18.023473   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:41:18.040806   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:41:18.040824   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:41:18.105407   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:41:18.105424   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:41:18.113954   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:41:18.113970   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:41:18.120629   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:41:18.126078   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:41:18.126090   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:41:18.145916   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:41:18.145948   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:41:18.207466   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:41:18.235227   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:41:18.235244   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:41:18.338414   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:41:18.338432   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:41:18.433557   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:41:18.433577   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:41:18.453890   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:41:18.453908   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:41:18.515844   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:41:18.515863   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:41:18.541656   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:41:19.239622   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.216077847s)
	I0725 13:41:19.239671   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.118958728s)
	I0725 13:41:19.239723   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032203727s)
	I0725 13:41:19.239744   62641 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220725134004-44543"
	I0725 13:41:19.449476   62641 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:41:19.491637   62641 addons.go:414] enableAddons completed in 2.147902561s
	I0725 13:41:19.530210   62641 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:41:19.558148   62641 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220725134004-44543" cluster and "default" namespace by default
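At this point the integration kubeconfig names "newest-cni-20220725134004-44543" as its current context. A quick confirmation sketch, shelling out to kubectl with KUBECONFIG pointed at the file the log shows being rewritten; purely illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Read back the context minikube just wrote to the profile kubeconfig.
    	cmd := exec.Command("kubectl", "config", "current-context")
    	cmd.Env = append(os.Environ(),
    		"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig")
    	out, err := cmd.Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("current context: %s", out) // expect newest-cni-20220725134004-44543
    }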
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:41:02 UTC, end at Mon 2022-07-25 20:41:57 UTC. --
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.663031894Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.663064942Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.663081122Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.663093655Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.664245937Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.664276196Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.664288346Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.664294128Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.667605790Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.672251647Z" level=info msg="Loading containers: start."
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.768110532Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.802676727Z" level=info msg="Loading containers: done."
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.817548035Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.817613827Z" level=info msg="Daemon has completed initialization"
	Jul 25 20:41:05 newest-cni-20220725134004-44543 systemd[1]: Started Docker Application Container Engine.
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.839438444Z" level=info msg="API listen on [::]:2376"
	Jul 25 20:41:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:05.846153229Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 25 20:41:17 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:17.963646288Z" level=info msg="ignoring event" container=4d5c0cf3432444ef8e23657a853734bdadf8a785189853a6831437d34ac61352 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:18 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:18.528482509Z" level=info msg="ignoring event" container=bfc5fa64427de6a2c1a407c50215720b68c16345284a514ca129856659867cde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:19 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:19.744048246Z" level=info msg="ignoring event" container=e09a8fe129d532ebfad65f504e1ba5079a85e5955392ecda3a79e215ab7de22f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:19 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:19.744581893Z" level=info msg="ignoring event" container=0ed9e93b90e3f2f5935b4da6a1b7c01b18b29b9ade60bdb01400125bb265771e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:20 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:20.218262876Z" level=info msg="ignoring event" container=fd1f65e140c274a7941fc62acf930b77874286a7ed070a1fba61064d9f1f0e74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:20 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:20.227067485Z" level=info msg="ignoring event" container=379763923e34a70dcd42778e78a29189f830c36741928c299d7df6c9e6130d62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:21 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:21.222925529Z" level=info msg="ignoring event" container=6ec6312513ad2671a7c068bf3be4050a9a61db883e4dcb30d92ffae1950d3b24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:21 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:21.232692239Z" level=info msg="ignoring event" container=4e1be7858fbde211607ca6962fa234d3ccdd02ff185901dcc5333e42bd53b40a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	91cc9473ad1e3       6e38f40d628db       39 seconds ago       Running             storage-provisioner       1                   465ed01bf15c1
	50d6d81d01c10       a634548d10b03       40 seconds ago       Running             kube-proxy                1                   738752b239808
	e08ee5a56041e       d3377ffb7177c       45 seconds ago       Running             kube-apiserver            1                   56b110080611a
	9c5616155d934       34cdf99b1bb3b       45 seconds ago       Running             kube-controller-manager   1                   dc11b4709ec7a
	06a18074b6a6d       5d725196c1f47       45 seconds ago       Running             kube-scheduler            1                   12b42f94fa891
	0e4e695d4faad       aebe758cef4cd       45 seconds ago       Running             etcd                      1                   f16afb1ced95f
	2b1a2f7d82bf9       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   226f6c6e00753
	b9f0d14e47ae4       a634548d10b03       About a minute ago   Exited              kube-proxy                0                   c538fc012392a
	433c6e459edd2       5d725196c1f47       About a minute ago   Exited              kube-scheduler            0                   17bf2e8fefb40
	c5c26c8d67b9a       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   b7b19f1a5a2d6
	457e1a17e9813       34cdf99b1bb3b       About a minute ago   Exited              kube-controller-manager   0                   f89737e8ee782
	50acabee2bf57       d3377ffb7177c       About a minute ago   Exited              kube-apiserver            0                   662ac0f86cb71
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220725134004-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220725134004-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=newest-cni-20220725134004-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_40_32_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:40:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220725134004-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:41:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:41:55 +0000   Mon, 25 Jul 2022 20:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:41:55 +0000   Mon, 25 Jul 2022 20:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:41:55 +0000   Mon, 25 Jul 2022 20:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 25 Jul 2022 20:41:55 +0000   Mon, 25 Jul 2022 20:41:55 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20220725134004-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                e79cffea-4229-4772-8c90-194a65d25819
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hbn6k                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     72s
	  kube-system                 etcd-newest-cni-20220725134004-44543                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kube-apiserver-newest-cni-20220725134004-44543             250m (4%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-newest-cni-20220725134004-44543    200m (3%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-mm6ph                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-newest-cni-20220725134004-44543             100m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 metrics-server-5c6f97fb75-92v57                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         69s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-59xjn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-dzp59                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 70s   kube-proxy       
	  Normal  Starting                 40s   kube-proxy       
	  Normal  Starting                 85s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  85s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           73s   node-controller  Node newest-cni-20220725134004-44543 event: Registered Node newest-cni-20220725134004-44543 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-20220725134004-44543 event: Registered Node newest-cni-20220725134004-44543 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s    kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s    kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s    kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s    kubelet          Node newest-cni-20220725134004-44543 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [0e4e695d4faa] <==
	* {"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.949Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725134004-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:41:13.949Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:41:13.950Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:41:13.950Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:41:13.950Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:41:13.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:41:13.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [c5c26c8d67b9] <==
	* {"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.172Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725134004-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:40:27.172Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.172Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:40:27.173Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:40:27.178Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:40:49.155Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-25T20:40:49.155Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220725134004-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/07/25 20:40:49 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/25 20:40:49 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-25T20:40:49.163Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-07-25T20:40:49.165Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:40:49.166Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:40:49.166Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220725134004-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  20:41:58 up  1:23,  0 users,  load average: 0.82, 0.91, 1.04
	Linux newest-cni-20220725134004-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [50acabee2bf5] <==
	* W0725 20:40:50.160327       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160342       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160354       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160357       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160360       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160378       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160390       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160417       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160433       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160456       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160467       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160474       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160489       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160490       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160494       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160508       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160509       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160514       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160526       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160527       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160530       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160543       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160558       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160572       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160688       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [e08ee5a56041] <==
	* I0725 20:41:15.748967       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 20:41:15.750324       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 20:41:15.750368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 20:41:15.768294       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:41:16.418938       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 20:41:16.649973       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0725 20:41:16.779961       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:41:16.780071       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:41:16.780216       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:41:16.780073       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:41:16.780315       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:41:16.781518       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 20:41:17.269182       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:41:17.276403       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:41:17.321600       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:41:17.333709       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 20:41:17.338504       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 20:41:17.343495       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:41:19.157450       1 controller.go:611] quota admission added evaluator for: namespaces
	I0725 20:41:19.419211       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.59.165]
	I0725 20:41:19.430596       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.84.73]
	I0725 20:41:54.415351       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 20:41:54.713208       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:41:54.717181       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [457e1a17e981] <==
	* I0725 20:40:44.856194       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0725 20:40:44.856244       1 shared_informer.go:262] Caches are synced for namespace
	I0725 20:40:44.861671       1 shared_informer.go:262] Caches are synced for node
	I0725 20:40:44.861707       1 range_allocator.go:173] Starting range CIDR allocator
	I0725 20:40:44.861711       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0725 20:40:44.861717       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0725 20:40:44.865313       1 range_allocator.go:374] Set node newest-cni-20220725134004-44543 PodCIDR to [192.168.0.0/24]
	I0725 20:40:44.866548       1 shared_informer.go:262] Caches are synced for crt configmap
	I0725 20:40:44.954303       1 shared_informer.go:262] Caches are synced for cronjob
	I0725 20:40:45.011751       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:40:45.052712       1 shared_informer.go:262] Caches are synced for HPA
	I0725 20:40:45.056434       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:40:45.410573       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mm6ph"
	I0725 20:40:45.473781       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:40:45.473848       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 20:40:45.474215       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:40:45.610684       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0725 20:40:45.675138       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0725 20:40:45.858336       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-x9jqp"
	I0725 20:40:45.861415       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hbn6k"
	I0725 20:40:45.879972       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-x9jqp"
	I0725 20:40:48.409741       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:40:48.448284       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:40:48.451657       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:40:48.456029       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-92v57"
	
	* 
	* ==> kube-controller-manager [9c5616155d93] <==
	* I0725 20:41:54.702315       1 shared_informer.go:262] Caches are synced for endpoint
	I0725 20:41:54.703314       1 shared_informer.go:262] Caches are synced for PVC protection
	I0725 20:41:54.705991       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0725 20:41:54.707856       1 shared_informer.go:262] Caches are synced for job
	I0725 20:41:54.708329       1 shared_informer.go:262] Caches are synced for attach detach
	I0725 20:41:54.714047       1 shared_informer.go:262] Caches are synced for node
	I0725 20:41:54.714211       1 range_allocator.go:173] Starting range CIDR allocator
	I0725 20:41:54.714234       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0725 20:41:54.714249       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0725 20:41:54.803508       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0725 20:41:54.803778       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:41:54.807936       1 shared_informer.go:262] Caches are synced for namespace
	I0725 20:41:54.808062       1 shared_informer.go:262] Caches are synced for service account
	I0725 20:41:54.809470       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:41:54.823530       1 shared_informer.go:262] Caches are synced for HPA
	I0725 20:41:54.824786       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:41:54.827404       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0725 20:41:54.842014       1 shared_informer.go:262] Caches are synced for disruption
	I0725 20:41:54.842150       1 disruption.go:371] Sending events to api server.
	I0725 20:41:54.902076       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0725 20:41:54.902417       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-dzp59"
	I0725 20:41:54.905995       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-59xjn"
	I0725 20:41:55.385220       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:41:55.385274       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 20:41:55.385352       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [50d6d81d01c1] <==
	* I0725 20:41:17.294317       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:41:17.294466       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:41:17.294686       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:41:17.339904       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:41:17.340007       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:41:17.340039       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:41:17.340057       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:41:17.340179       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:41:17.340400       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:41:17.341362       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:41:17.341455       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:41:17.342232       1 config.go:317] "Starting service config controller"
	I0725 20:41:17.342266       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:41:17.342322       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:41:17.342327       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:41:17.342459       1 config.go:444] "Starting node config controller"
	I0725 20:41:17.342537       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:41:17.443033       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:41:17.443160       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:41:17.443181       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [b9f0d14e47ae] <==
	* I0725 20:40:47.155642       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:40:47.155714       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:40:47.155738       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:40:47.177747       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:40:47.177793       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:40:47.177803       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:40:47.177812       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:40:47.177836       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:40:47.177979       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:40:47.178200       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:40:47.188775       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:40:47.189621       1 config.go:317] "Starting service config controller"
	I0725 20:40:47.190511       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:40:47.190053       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:40:47.190565       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:40:47.190317       1 config.go:444] "Starting node config controller"
	I0725 20:40:47.190678       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:40:47.339601       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:40:47.339768       1 shared_informer.go:262] Caches are synced for service config
	I0725 20:40:47.339854       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [06a18074b6a6] <==
	* W0725 20:41:12.454015       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0725 20:41:13.333332       1 serving.go:348] Generated self-signed cert in-memory
	W0725 20:41:15.657734       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 20:41:15.657799       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:41:15.657807       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 20:41:15.657813       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 20:41:15.718402       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0725 20:41:15.719540       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:41:15.721374       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 20:41:15.721418       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 20:41:15.724238       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:41:15.721445       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 20:41:15.824518       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [433c6e459edd] <==
	* E0725 20:40:29.963260       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:40:29.962955       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:29.963270       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:29.962118       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:40:29.963293       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:40:30.792518       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.792566       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.801233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.801302       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.809382       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.809453       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.828142       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.828215       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.840920       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:40:30.840989       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:40:30.853702       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:40:30.853772       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:40:30.899220       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 20:40:30.899256       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:40:30.907205       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 20:40:30.907242       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0725 20:40:33.858258       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:40:49.175633       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 20:40:49.176069       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0725 20:40:49.176525       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:41:02 UTC, end at Mon 2022-07-25 20:41:59 UTC. --
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:  > pod="kube-system/coredns-6d4b75cb6d-hbn6k"
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:41:59.165372    3883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-hbn6k_kube-system(fc055ddb-e646-4d07-b88b-583f467837dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-hbn6k_kube-system(fc055ddb-e646-4d07-b88b-583f467837dd)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"3ada682fb0b79ca242b4698d805696b02fbe3aa04207e1212aa7f00112ebae3d\\\" network for pod \\\"coredns-6d4b75cb6d-hbn6k\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-hbn6k_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"3ada682fb0b79ca242b4698d805696b02fbe3aa04207e1212aa7f00112ebae3d\\\" network for pod \\\"coredns-6d4b75cb6d-hbn6k\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-hbn6k_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-557699be9f50f9879ca3a235 -m comment --comment name: \\\"crio\\\" id: \\\"3ada682fb0b79ca242b4698d805696b02fbe3aa04207e1212aa7f00112ebae3d\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-557699be9f50f9879ca3a235':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-hbn6k" podUID=fc055ddb-e646-4d07-b88b-583f467837dd
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:41:59.636020    3883 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" network for pod "kubernetes-dashboard-5fd5574d9f-dzp59": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" network for pod "kubernetes-dashboard-5fd5574d9f-dzp59": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f32c9b223948622b6e0a5cc8 -m comment --comment name: "crio" id: "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f32c9b223948622b6e0a5cc8':No such file or directory
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:  >
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:41:59.636143    3883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" network for pod "kubernetes-dashboard-5fd5574d9f-dzp59": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" network for pod "kubernetes-dashboard-5fd5574d9f-dzp59": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f32c9b223948622b6e0a5cc8 -m comment --comment name: "crio" id: "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f32c9b223948622b6e0a5cc8':No such file or directory
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:  > pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-dzp59"
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:41:59.636169    3883 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" network for pod "kubernetes-dashboard-5fd5574d9f-dzp59": networkPlugin cni failed to set up pod "kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" network for pod "kubernetes-dashboard-5fd5574d9f-dzp59": networkPlugin cni failed to teardown pod "kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f32c9b223948622b6e0a5cc8 -m comment --comment name: "crio" id: "c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f32c9b223948622b6e0a5cc8':No such file or directory
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]:  > pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-dzp59"
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:41:59.636226    3883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard(2ee896c1-bbed-43c1-a2b5-8ba9befcfea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard(2ee896c1-bbed-43c1-a2b5-8ba9befcfea2)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-dzp59\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-dzp59\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-5fd5574d9f-dzp59_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f32c9b223948622b6e0a5cc8 -m comment --comment name: \\\"crio\\\" id: \\\"c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f32c9b223948622b6e0a5cc8':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-dzp59" podUID=2ee896c1-bbed-43c1-a2b5-8ba9befcfea2
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: I0725 20:41:59.752558    3883 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2a3996f544475221137710f67b13fd86ee490024adcbfbadd4c9cfa1cb3042ab"
	Jul 25 20:41:59 newest-cni-20220725134004-44543 kubelet[3883]: I0725 20:41:59.756654    3883 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3ada682fb0b79ca242b4698d805696b02fbe3aa04207e1212aa7f00112ebae3d"
	
	* 
	* ==> storage-provisioner [2b1a2f7d82bf] <==
	* I0725 20:40:48.094884       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:40:48.102698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:40:48.102765       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:40:48.111684       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:40:48.111831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb8b38a7-2150-4f24-ae19-896a5c3a4dfe", APIVersion:"v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725134004-44543_3e78199b-165f-4cfa-90c3-6e8e5ffdd829 became leader
	I0725 20:40:48.112054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_3e78199b-165f-4cfa-90c3-6e8e5ffdd829!
	I0725 20:40:48.212259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_3e78199b-165f-4cfa-90c3-6e8e5ffdd829!
	
	* 
	* ==> storage-provisioner [91cc9473ad1e] <==
	* I0725 20:41:18.339498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:41:18.352711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:41:18.352752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:41:54.417746       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:41:54.417889       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_6dbe4956-426a-42da-a748-5c0138a9bac3!
	I0725 20:41:54.417873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb8b38a7-2150-4f24-ae19-896a5c3a4dfe", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725134004-44543_6dbe4956-426a-42da-a748-5c0138a9bac3 became leader
	I0725 20:41:54.619834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_6dbe4956-426a-42da-a748-5c0138a9bac3!
	

                                                
                                                
-- /stdout --
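The dump above shows why the pause step struggled: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint, kubelet reports "PLEG is not healthy", and CNI sandbox setup keeps failing in iptables. As a minimal sketch of how a caller could wait for the node to leave that state, assuming only kubectl on PATH and reusing the context/node name from this run (waitNodeReady is hypothetical, not part of minikube's harness):

	// waitNodeReady is a hypothetical helper (not minikube code): it polls the
	// node's Ready condition via kubectl until it reports "True" or the
	// deadline passes, matching the condition shown in the describe output above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitNodeReady(kubeContext, node string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "node", node,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil // node reported Ready
			}
			time.Sleep(2 * time.Second) // still NotReady (e.g. PLEG unhealthy); retry
		}
		return fmt.Errorf("node %s not Ready within %s", node, timeout)
	}

	func main() {
		if err := waitNodeReady("newest-cni-20220725134004-44543",
			"newest-cni-20220725134004-44543", 90*time.Second); err != nil {
			fmt.Println(err)
		}
	}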
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220725134004-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220725134004-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.414617477s)
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220725134004-44543 describe pod coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220725134004-44543 describe pod coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59: exit status 1 (201.883564ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-hbn6k" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-92v57" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-59xjn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-dzp59" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220725134004-44543 describe pod coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59: exit status 1
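A detail worth noting in the failure above: the pod listing at helpers_test.go:261 is cluster-wide (-A), but the describe at helpers_test.go:275 passes no namespace, so kubectl looks for all four pods in default while coredns and metrics-server live in kube-system and the dashboard pods in kubernetes-dashboard — hence the four NotFound errors. A namespace-aware version of the same two-step check, as an illustrative Go sketch (not the harness's code; context name from the logs, standard kubectl jsonpath range syntax):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const ctx = "newest-cni-20220725134004-44543" // profile from the logs above

func main() {
	// List non-Running pods cluster-wide, keeping the namespace so the
	// follow-up describe can target it.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}`,
		"--field-selector", "status.phase!=Running").Output()
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	for _, line := range strings.Fields(string(out)) {
		nsName := strings.SplitN(line, "/", 2)
		if len(nsName) != 2 {
			continue
		}
		// Describe in the pod's own namespace instead of default.
		desc, err := exec.Command("kubectl", "--context", ctx,
			"-n", nsName[0], "describe", "pod", nsName[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n%s\n", line, err, desc)
			continue
		}
		fmt.Println(string(desc))
	}
}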
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220725134004-44543
helpers_test.go:235: (dbg) docker inspect newest-cni-20220725134004-44543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b",
	        "Created": "2022-07-25T20:40:11.821869037Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-25T20:41:02.844266Z",
	            "FinishedAt": "2022-07-25T20:41:00.916699019Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/hosts",
	        "LogPath": "/var/lib/docker/containers/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b/1fceac213dcb3ffedcd422e31e6e345fa503ac6e174ed2b3243f6a222d7cbd9b-json.log",
	        "Name": "/newest-cni-20220725134004-44543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220725134004-44543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220725134004-44543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b-init/diff:/var/lib/docker/overlay2/d54b85ebfe416686b2558e2807bd5eef62d8c3964c35bbc22b4965093a7c57f4/diff:/var/lib/docker/overlay2/5cb3ff6e7921802d23fd6cba2490df2c36857ad3d97eedcf6e537d1527a37f60/diff:/var/lib/docker/overlay2/e9074b1b2294cc5a00d3237646a71c4d30d89748eadbae26dd532d6a475c9851/diff:/var/lib/docker/overlay2/df8929c4859ecf2c8807040b0174a3f0dfa8da01532ea961cd7f4f2f0ccfbfc2/diff:/var/lib/docker/overlay2/177329168f9a1043ce94a2b7b32958779e8f685e85abf8e4c335d03d23295b26/diff:/var/lib/docker/overlay2/9eef19133d2d5e0111523234481658c1f73d7552db59842ad9ca687debda8fb3/diff:/var/lib/docker/overlay2/5c3a4cdf915d1e65e3003feefb9b9e649b432658e093e04971d5cb79e610dba9/diff:/var/lib/docker/overlay2/d1aeff9fecf983db994270b4acf2a2310c790c64d42c58d436aec3238f88b36c/diff:/var/lib/docker/overlay2/580dc19745fb3c1352983857ac5a555954893045702e7867e651e15a2d6d8732/diff:/var/lib/docker/overlay2/6c77b3
2028ad3ef74c2faf89b5080529c0391907ee9c1d7bbc00c25a5340238b/diff:/var/lib/docker/overlay2/e9e20374d05015c7a4f19eb8e14e663122cd2aef66d761bedfdef138551230b8/diff:/var/lib/docker/overlay2/cd6d64933d189089a7aaf08d7c7db50b89cc981d03b8272e4fdedb65495c3e4c/diff:/var/lib/docker/overlay2/768244a14ae888bb2a08747182035c24bd2a6a7a67f1ddae7b4e60efccb01339/diff:/var/lib/docker/overlay2/f90a95060678580a1a90ee9a563996249f7606bffcd06b680ef15234b56ab4a6/diff:/var/lib/docker/overlay2/49310159b9be5ac9bfa1dca18d816535ceb12e228963adafcf71f9d7d52d95ec/diff:/var/lib/docker/overlay2/cef5c40c08cddf01fe5ea252f4a28586c0bd1296ddc38fc37fc7e34af83c3c30/diff:/var/lib/docker/overlay2/b31bdcb9b1a69c5230400a4e1f5650c1cc8f9e27e673ff43708f9a450106733e/diff:/var/lib/docker/overlay2/899516301b3ffc21aee7cb9984f97d944fe5af0f2cd0bc5a5895bf187e4bff72/diff:/var/lib/docker/overlay2/f1bd0d2fc2ce458c0b6364dde56610c3d26bf8dc2cca2ae4e34a46f173f44595/diff:/var/lib/docker/overlay2/9f0da14f9c19d5a719554996b34f3ef80a0378921181aaf89ca41339ccc5bee3/diff:/var/lib/d
ocker/overlay2/4d6e8c40f7c65cbcfa827b62273eb37d2a0bb0407ee161b80523f11c157a52d4/diff:/var/lib/docker/overlay2/434355bebe182fb5c97b07ad8cf7a59b5474f8bc71379ff4f9690ffe31837e63/diff:/var/lib/docker/overlay2/128fb199261eb4fea9e79111861223ddb0d84438454cb7c0b9a61cc73d0796c3/diff:/var/lib/docker/overlay2/11c9cb7e5f15ac43115cb739da1135a53e08dc94ee1da27441e1d6c79eb73e62/diff:/var/lib/docker/overlay2/5e6bf225209b025da3db5a2d8862c3a0ee64417fa809d026e010644fc6619b48/diff:/var/lib/docker/overlay2/265a2d12f86abb8a607a06ec6f225186dfe622f7e23cbbf146a2ceffc7bc9bdc/diff:/var/lib/docker/overlay2/e9140b8aea381db0faf833cd2d12ff40267febe63f7c6bc2fa27384dbadf89bc/diff:/var/lib/docker/overlay2/07d7d0ab38cb08c95279e0706a65a6a4aeff8b5e72d559f8bc3dfcc0895654d6/diff:/var/lib/docker/overlay2/3a3d8346fb066863283ad39cf7f43615d7a1e99dc12aa7e8892d85d1c550bc63/diff:/var/lib/docker/overlay2/e5317b212815c832ebd2451ddd6e1f6922f350c5c1651ccfef7c5a8f34b7c1e3/diff:/var/lib/docker/overlay2/3bd7d98f0f0bf7ad2e29344a6ab4ca3211616e390da35077e76565c2074
df852/diff:/var/lib/docker/overlay2/ff63d2045cce1bb51b201079bba9d2b2a666b7331699b9407801b2fc00792bb3/diff:/var/lib/docker/overlay2/545bdf3f77f20e3db68875eda0a8829facb605119a630956ab79323688a3562a/diff:/var/lib/docker/overlay2/8f9b40124ec5ada59184358938542e028ca1e234ef1f6bce1816becf6ec8354e/diff:/var/lib/docker/overlay2/71b919cd2ece4207294cb577adab54fdba87cd9f7fdca1a53d6f376f87f11151/diff:/var/lib/docker/overlay2/b2d4a34fe28bd395d9b4ba0120173fc9df90c36a8a60a309ec088eca8930768c/diff:/var/lib/docker/overlay2/e986f180cbc34391c1ea6665283859b4c3c3061156ea1ff7196ffd869741e591/diff:/var/lib/docker/overlay2/cb98df203d42ea558f11fe84300b687cb65e6f3401b377a4466c9c8ef276ec59/diff:/var/lib/docker/overlay2/6371f9c0528fb1fb4a17bfcf3a34877fdd6d43dca1b5f06aff998d43d2010c46/diff:/var/lib/docker/overlay2/79ceef8e89c4094715c05bafb8c1427d09fb4a99f71f1a6186299303af352c98/diff:/var/lib/docker/overlay2/e034e76eeaff4e68d5158ffe54b55d122124ad82998be242fa08bec3010a0e64/diff",
	                "MergedDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/915ce4de5b10b3405706bde7e2072562370a2d01906870817917cbe90933b10b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220725134004-44543",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220725134004-44543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220725134004-44543",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220725134004-44543",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220725134004-44543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "481db35061c822c7c75df96b8c600d7e081d8c5b2f26b9ba4e5d2e5a885f6c8f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61130"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61131"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61133"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61134"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/481db35061c8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220725134004-44543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1fceac213dcb",
	                        "newest-cni-20220725134004-44543"
	                    ],
	                    "NetworkID": "b58b042d3e0330688fdf5ac0e347631e9444253fe03e9422e5a8023128b5d083",
	                    "EndpointID": "871f98eaca7ba77da270a748d3a0404ce10fbf3a7381db692d2ff0f9b73c2ea6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
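The inspect dump above is the post-mortem snapshot of the kic machine container itself: State.Status is "running" with Paused false, and StartedAt 20:41:02 just after FinishedAt 20:41:00 confirms the restart. For asserting on those fields programmatically, here is a minimal Go sketch that decodes docker inspect's JSON array into just the State subtree (illustrative only; the harness itself shells out via cli_runner with Go templates):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerInspect keeps only the State fields the post-mortem cares about;
// docker inspect emits a JSON array of such objects.
type containerInspect struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"newest-cni-20220725134004-44543").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var infos []containerInspect
	if err := json.Unmarshal(out, &infos); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	if len(infos) == 0 {
		fmt.Println("no such container")
		return
	}
	s := infos[0].State
	fmt.Printf("status=%s running=%v paused=%v\n", s.Status, s.Running, s.Paused)
}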
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220725134004-44543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220725134004-44543 logs -n 25: (5.279564298s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:26 PDT | 25 Jul 22 13:31 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220725132539-44543                | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | embed-certs-20220725132539-44543                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220725133257-44543      | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:32 PDT |
	|         | disable-driver-mounts-20220725133257-44543                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:32 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:33 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:33 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:34 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:34 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:39 PDT | 25 Jul 22 13:39 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220725133258-44543 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | default-k8s-different-port-20220725133258-44543            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725134004-44543 --memory=2200           | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:40 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:40 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220725134004-44543 --memory=2200           | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220725134004-44543                 | jenkins | v1.26.0 | 25 Jul 22 13:41 PDT | 25 Jul 22 13:41 PDT |
	|         | newest-cni-20220725134004-44543                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 13:41:01
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 13:41:01.604769   62641 out.go:296] Setting OutFile to fd 1 ...
	I0725 13:41:01.604901   62641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:41:01.604906   62641 out.go:309] Setting ErrFile to fd 2...
	I0725 13:41:01.604910   62641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 13:41:01.605019   62641 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 13:41:01.605467   62641 out.go:303] Setting JSON to false
	I0725 13:41:01.620475   62641 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":16833,"bootTime":1658764828,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 13:41:01.620568   62641 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 13:41:01.642382   62641 out.go:177] * [newest-cni-20220725134004-44543] minikube v1.26.0 on Darwin 12.4
	I0725 13:41:01.664186   62641 notify.go:193] Checking for updates...
	I0725 13:41:01.685918   62641 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 13:41:01.707967   62641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:01.729065   62641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 13:41:01.751081   62641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 13:41:01.772267   62641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 13:41:01.794758   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:01.795416   62641 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 13:41:01.864839   62641 docker.go:137] docker version: linux-20.10.17
	I0725 13:41:01.864995   62641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:41:01.998821   62641 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:41:01.926956783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:41:02.020502   62641 out.go:177] * Using the docker driver based on existing profile
	I0725 13:41:02.041400   62641 start.go:284] selected driver: docker
	I0725 13:41:02.041426   62641 start.go:808] validating driver "docker" against &{Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:tru
e extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:02.041590   62641 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 13:41:02.044316   62641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 13:41:02.175804   62641 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 20:41:02.106306455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 13:41:02.175969   62641 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 13:41:02.175988   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:02.175998   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:02.176021   62641 start_flags.go:310] config:
	{Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:02.219416   62641 out.go:177] * Starting control plane node newest-cni-20220725134004-44543 in cluster newest-cni-20220725134004-44543
	I0725 13:41:02.240914   62641 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 13:41:02.263062   62641 out.go:177] * Pulling base image ...
	I0725 13:41:02.305845   62641 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:41:02.305897   62641 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 13:41:02.305926   62641 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 13:41:02.305951   62641 cache.go:57] Caching tarball of preloaded images
	I0725 13:41:02.306134   62641 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0725 13:41:02.306156   62641 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0725 13:41:02.307199   62641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/config.json ...
	I0725 13:41:02.370228   62641 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0725 13:41:02.370244   62641 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0725 13:41:02.370255   62641 cache.go:208] Successfully downloaded all kic artifacts
	I0725 13:41:02.370307   62641 start.go:370] acquiring machines lock for newest-cni-20220725134004-44543: {Name:mk938127dcd35e39de5792da4cde1f6031a6baad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 13:41:02.370384   62641 start.go:374] acquired machines lock for "newest-cni-20220725134004-44543" in 57.562µs
	I0725 13:41:02.370403   62641 start.go:95] Skipping create...Using existing machine configuration
	I0725 13:41:02.370414   62641 fix.go:55] fixHost starting: 
	I0725 13:41:02.370643   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:02.437623   62641 fix.go:103] recreateIfNeeded on newest-cni-20220725134004-44543: state=Stopped err=<nil>
	W0725 13:41:02.437661   62641 fix.go:129] unexpected machine state, will restart: <nil>
	I0725 13:41:02.481261   62641 out.go:177] * Restarting existing docker container for "newest-cni-20220725134004-44543" ...
	I0725 13:41:02.502954   62641 cli_runner.go:164] Run: docker start newest-cni-20220725134004-44543
	I0725 13:41:02.847905   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:02.921451   62641 kic.go:415] container "newest-cni-20220725134004-44543" state is running.
	I0725 13:41:02.922010   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:02.999381   62641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/config.json ...
	I0725 13:41:02.999787   62641 machine.go:88] provisioning docker machine ...
	I0725 13:41:02.999811   62641 ubuntu.go:169] provisioning hostname "newest-cni-20220725134004-44543"
	I0725 13:41:02.999890   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.075991   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.076182   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.076197   62641 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220725134004-44543 && echo "newest-cni-20220725134004-44543" | sudo tee /etc/hostname
	I0725 13:41:03.206919   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220725134004-44543
	
	I0725 13:41:03.206994   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.283133   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.283305   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.283326   62641 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220725134004-44543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220725134004-44543/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220725134004-44543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 13:41:03.405138   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0725 13:41:03.405172   62641 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
	I0725 13:41:03.405208   62641 ubuntu.go:177] setting up certificates
	I0725 13:41:03.405220   62641 provision.go:83] configureAuth start
	I0725 13:41:03.405313   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:03.482288   62641 provision.go:138] copyHostCerts
	I0725 13:41:03.482371   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
	I0725 13:41:03.482380   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
	I0725 13:41:03.482467   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1082 bytes)
	I0725 13:41:03.482700   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
	I0725 13:41:03.482708   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
	I0725 13:41:03.482765   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
	I0725 13:41:03.482898   62641 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
	I0725 13:41:03.482906   62641 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
	I0725 13:41:03.482968   62641 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1675 bytes)
	I0725 13:41:03.483101   62641 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220725134004-44543 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220725134004-44543]
	I0725 13:41:03.601079   62641 provision.go:172] copyRemoteCerts
	I0725 13:41:03.601148   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 13:41:03.601194   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.676833   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:03.765491   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 13:41:03.782662   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0725 13:41:03.799217   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 13:41:03.815241   62641 provision.go:86] duration metric: configureAuth took 409.996098ms
	I0725 13:41:03.815253   62641 ubuntu.go:193] setting minikube options for container-runtime
	I0725 13:41:03.815429   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:03.815493   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:03.886548   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:03.886702   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:03.886713   62641 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 13:41:04.008823   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0725 13:41:04.008850   62641 ubuntu.go:71] root file system type: overlay
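The probe that produced the "overlay" answer above is `df --output=fstype / | tail -n 1`: print only the filesystem-type column for / and keep the value line under the header. A hedged standalone sketch of the same probe (note that `df --output` is GNU coreutils, available inside the Ubuntu-based kic container but not in macOS's BSD df; the rootFSType name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType returns the filesystem type of /, running the same
    // `df --output=fstype /` probe seen in the log and keeping only the
    // final token (the value beneath the "Type" header).
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(strings.TrimSpace(string(out)))
    	return fields[len(fields)-1], nil
    }

    func main() {
    	t, err := rootFSType()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(t) // e.g. "overlay" inside the container
    }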
	I0725 13:41:04.008999   62641 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 13:41:04.009069   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.080213   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:04.080363   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:04.080412   62641 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 13:41:04.215381   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 13:41:04.215466   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.287477   62641 main.go:134] libmachine: Using SSH client type: native
	I0725 13:41:04.287633   62641 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 61130 <nil> <nil>}
	I0725 13:41:04.287646   62641 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 13:41:04.413431   62641 main.go:134] libmachine: SSH cmd err, output: <nil>: 
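The command above is a compare-then-replace: `diff -u` succeeds when the staged docker.service.new matches the installed unit, so the mv/daemon-reload/restart chain only runs when the file actually changed, and an unchanged provision never bounces Docker. A standalone sketch of that pattern (paths and the restart hook are illustrative; the real command also enables and restarts the docker unit):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // replaceIfChanged installs newPath over oldPath and runs restart only
    // when the contents differ, mirroring the diff || { mv; restart; }
    // idiom in the log. A missing oldPath counts as "changed".
    func replaceIfChanged(oldPath, newPath string, restart func() error) error {
    	oldData, _ := os.ReadFile(oldPath)
    	newData, err := os.ReadFile(newPath)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(oldData, newData) {
    		return os.Remove(newPath) // nothing changed; drop the staged copy
    	}
    	if err := os.Rename(newPath, oldPath); err != nil {
    		return err
    	}
    	return restart()
    }

    func main() {
    	err := replaceIfChanged(
    		"/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new",
    		func() error { return exec.Command("systemctl", "daemon-reload").Run() },
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }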
	I0725 13:41:04.413445   62641 machine.go:91] provisioned docker machine in 1.413607753s
	I0725 13:41:04.413455   62641 start.go:307] post-start starting for "newest-cni-20220725134004-44543" (driver="docker")
	I0725 13:41:04.413460   62641 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 13:41:04.413520   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 13:41:04.413564   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.483819   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.573866   62641 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 13:41:04.577512   62641 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 13:41:04.577532   62641 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 13:41:04.577543   62641 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 13:41:04.577548   62641 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0725 13:41:04.577559   62641 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
	I0725 13:41:04.577671   62641 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
	I0725 13:41:04.577813   62641 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem -> 445432.pem in /etc/ssl/certs
	I0725 13:41:04.577999   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 13:41:04.585135   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:41:04.601851   62641 start.go:310] post-start completed in 188.379013ms
	I0725 13:41:04.601920   62641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 13:41:04.601965   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.671831   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.762233   62641 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 13:41:04.766670   62641 fix.go:57] fixHost completed within 2.396186108s
	I0725 13:41:04.766683   62641 start.go:82] releasing machines lock for "newest-cni-20220725134004-44543", held for 2.396220574s
	I0725 13:41:04.766757   62641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220725134004-44543
	I0725 13:41:04.837640   62641 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0725 13:41:04.837643   62641 ssh_runner.go:195] Run: systemctl --version
	I0725 13:41:04.837724   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.837723   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:04.913021   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.915823   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:04.996139   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 13:41:05.213499   62641 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0725 13:41:05.226262   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.290328   62641 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0725 13:41:05.366103   62641 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 13:41:05.375827   62641 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0725 13:41:05.375888   62641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 13:41:05.385531   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 13:41:05.398027   62641 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 13:41:05.465167   62641 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 13:41:05.534527   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.599370   62641 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 13:41:05.829754   62641 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 13:41:05.899999   62641 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 13:41:05.969149   62641 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0725 13:41:05.978813   62641 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 13:41:05.978890   62641 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 13:41:05.982938   62641 start.go:471] Will wait 60s for crictl version
	I0725 13:41:05.982986   62641 ssh_runner.go:195] Run: sudo crictl version
	I0725 13:41:06.011832   62641 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
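Both 60 s waits logged above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are poll-until-deadline loops: run the probe, and on failure sleep briefly and retry until the budget is spent. A minimal sketch of the first wait, assuming a plain stat-based probe and an illustrative waitForPath name:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls for path to exist, retrying every interval until
    // the timeout elapses -- the same shape as the 60s socket wait above.
    func waitForPath(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for " + path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, time.Second)
    	fmt.Println(err)
    }

Polling with a short sleep keeps the common case (socket appears almost immediately, as here) fast while still bounding the worst case.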
	I0725 13:41:06.011903   62641 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:41:06.045117   62641 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 13:41:06.102152   62641 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0725 13:41:06.102359   62641 cli_runner.go:164] Run: docker exec -t newest-cni-20220725134004-44543 dig +short host.docker.internal
	I0725 13:41:06.235422   62641 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0725 13:41:06.235509   62641 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0725 13:41:06.239630   62641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:41:06.249184   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:06.343694   62641 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0725 13:41:06.365259   62641 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 13:41:06.365400   62641 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:41:06.396834   62641 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 13:41:06.396848   62641 docker.go:542] Images already preloaded, skipping extraction
	I0725 13:41:06.396922   62641 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 13:41:06.425279   62641 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 13:41:06.425301   62641 cache_images.go:84] Images are preloaded, skipping loading
	I0725 13:41:06.425374   62641 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 13:41:06.500080   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:06.500094   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:06.500110   62641 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0725 13:41:06.500125   62641 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220725134004-44543 NodeName:newest-cni-20220725134004-44543 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0725 13:41:06.500228   62641 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220725134004-44543"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 13:41:06.500302   62641 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220725134004-44543 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0725 13:41:06.500378   62641 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0725 13:41:06.508168   62641 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 13:41:06.508230   62641 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 13:41:06.516145   62641 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0725 13:41:06.529064   62641 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 13:41:06.542568   62641 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0725 13:41:06.555691   62641 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 13:41:06.559354   62641 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 13:41:06.568796   62641 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543 for IP: 192.168.76.2
	I0725 13:41:06.568905   62641 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
	I0725 13:41:06.568957   62641 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
	I0725 13:41:06.569029   62641 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/client.key
	I0725 13:41:06.569092   62641 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.key.31bdca25
	I0725 13:41:06.569142   62641 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.key
	I0725 13:41:06.569349   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem (1338 bytes)
	W0725 13:41:06.569386   62641 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543_empty.pem, impossibly tiny 0 bytes
	I0725 13:41:06.569402   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 13:41:06.569437   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1082 bytes)
	I0725 13:41:06.569468   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
	I0725 13:41:06.569497   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1675 bytes)
	I0725 13:41:06.569555   62641 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem (1708 bytes)
	I0725 13:41:06.570104   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0725 13:41:06.586702   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 13:41:06.602950   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 13:41:06.619736   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/newest-cni-20220725134004-44543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 13:41:06.642804   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 13:41:06.658996   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 13:41:06.675332   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 13:41:06.691752   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 13:41:06.708295   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/44543.pem --> /usr/share/ca-certificates/44543.pem (1338 bytes)
	I0725 13:41:06.724751   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/445432.pem --> /usr/share/ca-certificates/445432.pem (1708 bytes)
	I0725 13:41:06.741061   62641 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 13:41:06.757627   62641 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 13:41:06.770205   62641 ssh_runner.go:195] Run: openssl version
	I0725 13:41:06.775104   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/445432.pem && ln -fs /usr/share/ca-certificates/445432.pem /etc/ssl/certs/445432.pem"
	I0725 13:41:06.782798   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.786665   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 19:24 /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.786710   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/445432.pem
	I0725 13:41:06.791947   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/445432.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 13:41:06.798822   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 13:41:06.806371   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.810124   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 19:19 /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.810165   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 13:41:06.815270   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 13:41:06.822348   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44543.pem && ln -fs /usr/share/ca-certificates/44543.pem /etc/ssl/certs/44543.pem"
	I0725 13:41:06.829900   62641 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.833621   62641 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 19:24 /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.833658   62641 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44543.pem
	I0725 13:41:06.838937   62641 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/44543.pem /etc/ssl/certs/51391683.0"
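The openssl/ln pairs above reproduce what c_rehash or update-ca-certificates would do: OpenSSL locates CAs in /etc/ssl/certs by subject-hash filenames of the form <hash>.0, so each installed PEM is hashed with `openssl x509 -hash -noout` and a symlink named after the hash is pointed at it (skipping the link if one already exists, per the `test -L || ln -fs` guard). A sketch of the same step, assuming the openssl binary is on PATH and with illustrative paths:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes a certificate's OpenSSL subject hash and
    // points <certsDir>/<hash>.0 at it, so the directory-based CA lookup
    // can find the certificate -- the hash-then-symlink step in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // replace any stale link, like ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }

The ".0" suffix is a collision counter: distinct CAs whose subjects hash to the same value would get .1, .2, and so on.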
	I0725 13:41:06.845919   62641 kubeadm.go:395] StartCluster: {Name:newest-cni-20220725134004-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220725134004-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 13:41:06.846015   62641 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:41:06.874420   62641 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 13:41:06.882063   62641 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0725 13:41:06.882077   62641 kubeadm.go:626] restartCluster start
	I0725 13:41:06.882121   62641 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 13:41:06.888659   62641 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:06.888716   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:06.960144   62641 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220725134004-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:06.960306   62641 kubeconfig.go:127] "newest-cni-20220725134004-44543" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
	I0725 13:41:06.960639   62641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:06.961783   62641 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 13:41:06.969405   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:06.969452   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:06.977546   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.177691   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.177841   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.188294   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.379725   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.379891   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.390695   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.578354   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.578503   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.589390   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.778053   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.778208   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.788555   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:07.978483   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:07.978583   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:07.987167   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.179736   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.179873   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.190170   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.379744   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.379930   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.390692   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.579742   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.579884   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.590517   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.778483   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.778568   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.787637   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:08.978431   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:08.978579   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:08.988922   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.179779   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.179927   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.191022   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.378051   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.378196   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.388421   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.577880   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.578004   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.588394   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.777861   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.778000   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.788393   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.979800   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.979933   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.990537   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.990547   62641 api_server.go:165] Checking apiserver status ...
	I0725 13:41:09.990588   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0725 13:41:09.998309   62641 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:09.998319   62641 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
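Each "Checking apiserver status" round above is the same pgrep probe repeated on a roughly 200 ms cadence until either the process appears or the retry budget runs out, at which point the cluster is declared in need of reconfiguration. A standalone sketch of that loop (the attempt count and interval are read off the timestamps above, not taken from the source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverPID runs the same pgrep probe as the log; a non-zero exit
    // (no matching process) surfaces as an error from Output.
    func apiserverPID() (string, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	return string(out), err
    }

    func main() {
    	// Poll roughly every 200ms, giving up after a fixed budget.
    	for attempt := 0; attempt < 17; attempt++ {
    		if pid, err := apiserverPID(); err == nil {
    			fmt.Printf("apiserver pid: %s", pid)
    			return
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	fmt.Println("apiserver never appeared; cluster needs reconfigure")
    }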
	I0725 13:41:09.998326   62641 kubeadm.go:1092] stopping kube-system containers ...
	I0725 13:41:09.998375   62641 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 13:41:10.028922   62641 docker.go:443] Stopping containers: [2e12297de868 a9ff123b784e 2b1a2f7d82bf 226f6c6e0075 b9f0d14e47ae c538fc012392 2322278f6a58 775d14994103 433c6e459edd 17bf2e8fefb4 c5c26c8d67b9 457e1a17e981 50acabee2bf5 b7b19f1a5a2d 662ac0f86cb7 f89737e8ee78]
	I0725 13:41:10.028995   62641 ssh_runner.go:195] Run: docker stop 2e12297de868 a9ff123b784e 2b1a2f7d82bf 226f6c6e0075 b9f0d14e47ae c538fc012392 2322278f6a58 775d14994103 433c6e459edd 17bf2e8fefb4 c5c26c8d67b9 457e1a17e981 50acabee2bf5 b7b19f1a5a2d 662ac0f86cb7 f89737e8ee78
	I0725 13:41:10.059285   62641 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 13:41:10.069060   62641 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 13:41:10.076097   62641 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 25 20:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 25 20:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 25 20:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 25 20:40 /etc/kubernetes/scheduler.conf
	
	I0725 13:41:10.076143   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 13:41:10.082864   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 13:41:10.089720   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 13:41:10.096488   62641 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:10.096531   62641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 13:41:10.103289   62641 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 13:41:10.110095   62641 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 13:41:10.110147   62641 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 13:41:10.116727   62641 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 13:41:10.123750   62641 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0725 13:41:10.123759   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:10.167626   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.147058   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.324377   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.369837   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:11.433701   62641 api_server.go:51] waiting for apiserver process to appear ...
	I0725 13:41:11.433762   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:11.946640   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.445830   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.944870   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:12.956307   62641 api_server.go:71] duration metric: took 1.522564033s to wait for apiserver process to appear ...
	I0725 13:41:12.956326   62641 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:41:12.956340   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:15.637409   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 13:41:15.637424   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 13:41:16.138026   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:16.146708   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:41:16.146728   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:41:16.637608   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:16.642891   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 13:41:16.642907   62641 api_server.go:102] status: https://127.0.0.1:61134/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 13:41:17.138172   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:17.144715   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 200:
	ok
	I0725 13:41:17.151405   62641 api_server.go:140] control plane version: v1.24.2
	I0725 13:41:17.151419   62641 api_server.go:130] duration metric: took 4.194962023s to wait for apiserver health ...
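The health wait above hits /healthz about twice a second and treats anything other than 200 "ok" as retryable: the initial 403 is expected while the anonymous user is still locked out pre-RBAC-bootstrap, and the 500s enumerate post-start hooks that have not finished. A hedged sketch of such a poller (TLS verification is skipped here only to keep the example self-contained; the real check authenticates with the cluster's client certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline passes, logging non-200 bodies along the
    // way, as the log above does.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz reported "ok"
    			}
    			// 403 before RBAC bootstrap and 500 with pending
    			// poststarthooks are both retryable, as seen above.
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
    	_ = waitHealthz("https://127.0.0.1:61134/healthz", time.Minute)
    }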
	I0725 13:41:17.151425   62641 cni.go:95] Creating CNI manager for ""
	I0725 13:41:17.151435   62641 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 13:41:17.151449   62641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:41:17.160753   62641 system_pods.go:59] 9 kube-system pods found
	I0725 13:41:17.160773   62641 system_pods.go:61] "coredns-6d4b75cb6d-hbn6k" [fc055ddb-e646-4d07-b88b-583f467837dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.160779   62641 system_pods.go:61] "coredns-6d4b75cb6d-x9jqp" [fa2d6b5c-bb0c-4fc9-9443-d933aed66032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.160785   62641 system_pods.go:61] "etcd-newest-cni-20220725134004-44543" [fadf380e-3d9e-49f1-b1c3-2802743dcb63] Running
	I0725 13:41:17.160789   62641 system_pods.go:61] "kube-apiserver-newest-cni-20220725134004-44543" [4c112752-e4f6-477a-9489-ff1a7b1a92e3] Running
	I0725 13:41:17.160796   62641 system_pods.go:61] "kube-controller-manager-newest-cni-20220725134004-44543" [8a1cdd56-6f7e-4d69-ae6e-8260c02c5acc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:41:17.160802   62641 system_pods.go:61] "kube-proxy-mm6ph" [820f3eb0-aba6-415b-a884-b67741ece355] Running
	I0725 13:41:17.160807   62641 system_pods.go:61] "kube-scheduler-newest-cni-20220725134004-44543" [06d1a7e9-4b67-47fa-b62c-eebb4e5067fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:41:17.160812   62641 system_pods.go:61] "metrics-server-5c6f97fb75-92v57" [32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:41:17.160817   62641 system_pods.go:61] "storage-provisioner" [d910dec0-c09f-4225-810a-5a5d773f923b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 13:41:17.160820   62641 system_pods.go:74] duration metric: took 9.367171ms to wait for pod list to return data ...
	I0725 13:41:17.160826   62641 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:41:17.164505   62641 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:41:17.164523   62641 node_conditions.go:123] node cpu capacity is 6
	I0725 13:41:17.164532   62641 node_conditions.go:105] duration metric: took 3.701498ms to run NodePressure ...
	I0725 13:41:17.164543   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 13:41:17.332026   62641 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 13:41:17.339854   62641 ops.go:34] apiserver oom_adj: -16
	I0725 13:41:17.339874   62641 kubeadm.go:630] restartCluster took 10.457469712s
	I0725 13:41:17.339885   62641 kubeadm.go:397] StartCluster complete in 10.493656318s
	I0725 13:41:17.339902   62641 settings.go:142] acquiring lock: {Name:mk9b5e011e5806157f9a122f8c65a6cc16a3d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:17.339979   62641 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 13:41:17.340560   62641 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mk33e723b045d7cbb2c524ed31a7e78a4a0b6415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 13:41:17.343616   62641 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220725134004-44543" rescaled to 1
	I0725 13:41:17.343649   62641 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 13:41:17.343669   62641 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 13:41:17.343676   62641 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0725 13:41:17.343824   62641 config.go:178] Loaded profile config "newest-cni-20220725134004-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 13:41:17.367695   62641 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367523   62641 out.go:177] * Verifying Kubernetes components...
	I0725 13:41:17.367695   62641 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367750   62641 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220725134004-44543"
	I0725 13:41:17.367698   62641 addons.go:65] Setting dashboard=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.367839   62641 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.426431   62641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 13:41:17.426434   62641 addons.go:153] Setting addon dashboard=true in "newest-cni-20220725134004-44543"
	W0725 13:41:17.426475   62641 addons.go:162] addon dashboard should already be in state true
	W0725 13:41:17.426482   62641 addons.go:162] addon storage-provisioner should already be in state true
	I0725 13:41:17.367832   62641 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220725134004-44543"
	I0725 13:41:17.368425   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.426544   62641 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.426562   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	W0725 13:41:17.426582   62641 addons.go:162] addon metrics-server should already be in state true
	I0725 13:41:17.426595   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.426646   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.430002   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.430013   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.430127   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.441775   62641 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0725 13:41:17.455220   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.619130   62641 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0725 13:41:17.564243   62641 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220725134004-44543"
	I0725 13:41:17.578481   62641 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 13:41:17.598496   62641 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0725 13:41:17.600117   62641 api_server.go:51] waiting for apiserver process to appear ...
	W0725 13:41:17.619193   62641 addons.go:162] addon default-storageclass should already be in state true
	I0725 13:41:17.640288   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 13:41:17.640305   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 13:41:17.619216   62641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 13:41:17.640332   62641 host.go:66] Checking if "newest-cni-20220725134004-44543" exists ...
	I0725 13:41:17.677208   62641 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0725 13:41:17.640379   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.640398   62641 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:41:17.640698   62641 cli_runner.go:164] Run: docker container inspect newest-cni-20220725134004-44543 --format={{.State.Status}}
	I0725 13:41:17.654154   62641 api_server.go:71] duration metric: took 310.477127ms to wait for apiserver process to appear ...
	I0725 13:41:17.714129   62641 api_server.go:87] waiting for apiserver healthz status ...
	I0725 13:41:17.714136   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 13:41:17.714143   62641 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61134/healthz ...
	I0725 13:41:17.714177   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 13:41:17.714187   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 13:41:17.714219   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.714247   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.726411   62641 api_server.go:266] https://127.0.0.1:61134/healthz returned 200:
	ok
	I0725 13:41:17.729015   62641 api_server.go:140] control plane version: v1.24.2
	I0725 13:41:17.729043   62641 api_server.go:130] duration metric: took 14.904203ms to wait for apiserver health ...
	I0725 13:41:17.729051   62641 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 13:41:17.737689   62641 system_pods.go:59] 9 kube-system pods found
	I0725 13:41:17.737718   62641 system_pods.go:61] "coredns-6d4b75cb6d-hbn6k" [fc055ddb-e646-4d07-b88b-583f467837dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.737728   62641 system_pods.go:61] "coredns-6d4b75cb6d-x9jqp" [fa2d6b5c-bb0c-4fc9-9443-d933aed66032] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 13:41:17.737735   62641 system_pods.go:61] "etcd-newest-cni-20220725134004-44543" [fadf380e-3d9e-49f1-b1c3-2802743dcb63] Running
	I0725 13:41:17.737742   62641 system_pods.go:61] "kube-apiserver-newest-cni-20220725134004-44543" [4c112752-e4f6-477a-9489-ff1a7b1a92e3] Running
	I0725 13:41:17.737755   62641 system_pods.go:61] "kube-controller-manager-newest-cni-20220725134004-44543" [8a1cdd56-6f7e-4d69-ae6e-8260c02c5acc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 13:41:17.737761   62641 system_pods.go:61] "kube-proxy-mm6ph" [820f3eb0-aba6-415b-a884-b67741ece355] Running
	I0725 13:41:17.737776   62641 system_pods.go:61] "kube-scheduler-newest-cni-20220725134004-44543" [06d1a7e9-4b67-47fa-b62c-eebb4e5067fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 13:41:17.737788   62641 system_pods.go:61] "metrics-server-5c6f97fb75-92v57" [32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 13:41:17.737799   62641 system_pods.go:61] "storage-provisioner" [d910dec0-c09f-4225-810a-5a5d773f923b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 13:41:17.737808   62641 system_pods.go:74] duration metric: took 8.751882ms to wait for pod list to return data ...
	I0725 13:41:17.737815   62641 default_sa.go:34] waiting for default service account to be created ...
	I0725 13:41:17.741962   62641 default_sa.go:45] found service account: "default"
	I0725 13:41:17.741978   62641 default_sa.go:55] duration metric: took 4.138262ms for default service account to be created ...
	I0725 13:41:17.741988   62641 kubeadm.go:572] duration metric: took 398.309938ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0725 13:41:17.742005   62641 node_conditions.go:102] verifying NodePressure condition ...
	I0725 13:41:17.747495   62641 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0725 13:41:17.747515   62641 node_conditions.go:123] node cpu capacity is 6
	I0725 13:41:17.747525   62641 node_conditions.go:105] duration metric: took 5.502342ms to run NodePressure ...
	I0725 13:41:17.747537   62641 start.go:216] waiting for startup goroutines ...
	I0725 13:41:17.830346   62641 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 13:41:17.830368   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 13:41:17.830438   62641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220725134004-44543
	I0725 13:41:17.845326   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.847941   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.850143   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:17.916591   62641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61130 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/newest-cni-20220725134004-44543/id_rsa Username:docker}
	I0725 13:41:18.017591   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 13:41:18.017604   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0725 13:41:18.021389   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 13:41:18.021403   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 13:41:18.023473   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 13:41:18.040806   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 13:41:18.040824   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 13:41:18.105407   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 13:41:18.105424   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 13:41:18.113954   62641 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:41:18.113970   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 13:41:18.120629   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 13:41:18.126078   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 13:41:18.126090   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0725 13:41:18.145916   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 13:41:18.145948   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0725 13:41:18.207466   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 13:41:18.235227   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 13:41:18.235244   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 13:41:18.338414   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 13:41:18.338432   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 13:41:18.433557   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 13:41:18.433577   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 13:41:18.453890   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 13:41:18.453908   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0725 13:41:18.515844   62641 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:41:18.515863   62641 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 13:41:18.541656   62641 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 13:41:19.239622   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.216077847s)
	I0725 13:41:19.239671   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.118958728s)
	I0725 13:41:19.239723   62641 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.032203727s)
	I0725 13:41:19.239744   62641 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220725134004-44543"
	I0725 13:41:19.449476   62641 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0725 13:41:19.491637   62641 addons.go:414] enableAddons completed in 2.147902561s
	I0725 13:41:19.530210   62641 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
	I0725 13:41:19.558148   62641 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220725134004-44543" cluster and "default" namespace by default
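	
	To replay the apiserver health probe that the start log above performs, a rough manual equivalent (assuming the host port mapping 61134 from this particular run; the docker driver assigns a fresh port on every start, so read it from `docker container inspect` as the log does) is:
	
	  curl -k https://127.0.0.1:61134/healthz?verbose
	
	The verbose form returns the same [+]/[-] per-check listing that appears in the log whenever the probe comes back with a 500.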
	
	* 
	* ==> Docker <==
	* -- Logs begin at Mon 2022-07-25 20:41:02 UTC, end at Mon 2022-07-25 20:42:05 UTC. --
	Jul 25 20:41:18 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:18.528482509Z" level=info msg="ignoring event" container=bfc5fa64427de6a2c1a407c50215720b68c16345284a514ca129856659867cde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:19 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:19.744048246Z" level=info msg="ignoring event" container=e09a8fe129d532ebfad65f504e1ba5079a85e5955392ecda3a79e215ab7de22f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:19 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:19.744581893Z" level=info msg="ignoring event" container=0ed9e93b90e3f2f5935b4da6a1b7c01b18b29b9ade60bdb01400125bb265771e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:20 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:20.218262876Z" level=info msg="ignoring event" container=fd1f65e140c274a7941fc62acf930b77874286a7ed070a1fba61064d9f1f0e74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:20 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:20.227067485Z" level=info msg="ignoring event" container=379763923e34a70dcd42778e78a29189f830c36741928c299d7df6c9e6130d62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:21 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:21.222925529Z" level=info msg="ignoring event" container=6ec6312513ad2671a7c068bf3be4050a9a61db883e4dcb30d92ffae1950d3b24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:21 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:21.232692239Z" level=info msg="ignoring event" container=4e1be7858fbde211607ca6962fa234d3ccdd02ff185901dcc5333e42bd53b40a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:58 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:58.919115802Z" level=info msg="ignoring event" container=3ada682fb0b79ca242b4698d805696b02fbe3aa04207e1212aa7f00112ebae3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:59 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:59.391227019Z" level=info msg="ignoring event" container=c2cdc35d21e33e3ae328057568cad22876daa3cf709d841a2415e6a63d2c9fb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:41:59 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:41:59.891662058Z" level=info msg="ignoring event" container=2a3996f544475221137710f67b13fd86ee490024adcbfbadd4c9cfa1cb3042ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:00 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:00.309868223Z" level=info msg="ignoring event" container=67de55fd318919f0f886573b388c2c843680c14ab6d0bff83389baf63bbb400f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:00 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:00.593978330Z" level=info msg="ignoring event" container=823c0dec950cc2cac939d15bb31a744b50ca77eb9671f84d33f78c3379721081 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:00 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:00.604517351Z" level=info msg="ignoring event" container=c28c521221e6615ac01276bb1a6aae2ddd0d0731a0ce82aa4d2dd6f8df9a587f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:01 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:01.534493457Z" level=info msg="ignoring event" container=5877234c165137eb5af714adbdf915ecea250b546865ad1c196378b272e5c0ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:01 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:01.540581906Z" level=info msg="ignoring event" container=c361592f8c511266cc9acdcea6382f0bda5d45a949f9bfb878cf4d6132f156c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:01 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:01.568426624Z" level=info msg="ignoring event" container=06fc28bda593e06298904c74c85c1e54d67c1503c48d50b7ae762f93e767d62b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:01 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:01.643749819Z" level=info msg="ignoring event" container=a6d1396b677e4b94764cf70e03240dd21110267cbbbb2348378a72ce9069635f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:03 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:03.921057558Z" level=info msg="ignoring event" container=e8dbd92dfae5d08081d59a5763c3d4c1964670e966456ecc3cadd9c1b5b28fd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:03 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:03.998995881Z" level=info msg="ignoring event" container=59e52892155758b9dca305eca87e8334d81a40d30a6d17cd1098c74c37835171 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:04 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:04.120533037Z" level=info msg="ignoring event" container=f2c6182fe4c54a5a0ed740214436d593d0cebb8a3f2cf2eb384593377f20f562 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:04 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:04.281148304Z" level=info msg="ignoring event" container=c8072d9701dd61c13174b693d53fd71a9545f5fe3a7fa1c5daeec015c6bfe350 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:05.603101151Z" level=info msg="ignoring event" container=79e4addc2022aba1a57cc0f0b83eeba5175991314268cf879787f6cd73d4f3e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:05.677066343Z" level=info msg="ignoring event" container=c9b78ec9ed176ac2299e1e2bd4fc285f345bb9c4c42b16a5264f706564093266 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:05.717180100Z" level=info msg="ignoring event" container=45c983d467afcfa580a306bc979837d5a2a20b0fa771cc4f140709dbd5440dae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 20:42:05 newest-cni-20220725134004-44543 dockerd[587]: time="2022-07-25T20:42:05.730243359Z" level=info msg="ignoring event" container=e900ee731cab2331d4ef3f65fb79cf622292f16d5a41e70a937019abafc96a53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	91cc9473ad1e3       6e38f40d628db       48 seconds ago       Running             storage-provisioner       1                   465ed01bf15c1
	50d6d81d01c10       a634548d10b03       49 seconds ago       Running             kube-proxy                1                   738752b239808
	e08ee5a56041e       d3377ffb7177c       54 seconds ago       Running             kube-apiserver            1                   56b110080611a
	9c5616155d934       34cdf99b1bb3b       54 seconds ago       Running             kube-controller-manager   1                   dc11b4709ec7a
	06a18074b6a6d       5d725196c1f47       54 seconds ago       Running             kube-scheduler            1                   12b42f94fa891
	0e4e695d4faad       aebe758cef4cd       54 seconds ago       Running             etcd                      1                   f16afb1ced95f
	2b1a2f7d82bf9       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   226f6c6e00753
	b9f0d14e47ae4       a634548d10b03       About a minute ago   Exited              kube-proxy                0                   c538fc012392a
	433c6e459edd2       5d725196c1f47       About a minute ago   Exited              kube-scheduler            0                   17bf2e8fefb40
	c5c26c8d67b9a       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   b7b19f1a5a2d6
	457e1a17e9813       34cdf99b1bb3b       About a minute ago   Exited              kube-controller-manager   0                   f89737e8ee782
	50acabee2bf57       d3377ffb7177c       About a minute ago   Exited              kube-apiserver            0                   662ac0f86cb71
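	
	The container-status table above is the docker runtime's view of the pods (attempt 1 containers are the post-restart replacements of the exited attempt 0 ones). A roughly equivalent manual check, assuming this run's profile name, is to run docker inside the node:
	
	  minikube -p newest-cni-20220725134004-44543 ssh -- docker ps -a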
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220725134004-44543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220725134004-44543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
	                    minikube.k8s.io/name=newest-cni-20220725134004-44543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_25T13_40_32_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Jul 2022 20:40:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220725134004-44543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Jul 2022 20:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Jul 2022 20:42:05 +0000   Mon, 25 Jul 2022 20:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Jul 2022 20:42:05 +0000   Mon, 25 Jul 2022 20:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Jul 2022 20:42:05 +0000   Mon, 25 Jul 2022 20:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Jul 2022 20:42:05 +0000   Mon, 25 Jul 2022 20:42:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-20220725134004-44543
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                e79cffea-4229-4772-8c90-194a65d25819
	  Boot ID:                    f0b8333d-7eba-4eec-b9ed-6046a2426c0d
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hbn6k                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     81s
	  kube-system                 etcd-newest-cni-20220725134004-44543                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         93s
	  kube-system                 kube-apiserver-newest-cni-20220725134004-44543              250m (4%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-newest-cni-20220725134004-44543     200m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-mm6ph                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-newest-cni-20220725134004-44543              100m (1%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 metrics-server-5c6f97fb75-92v57                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         78s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-59xjn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-dzp59                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 79s   kube-proxy       
	  Normal  Starting                 49s   kube-proxy       
	  Normal  Starting                 94s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  94s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           82s   node-controller  Node newest-cni-20220725134004-44543 event: Registered Node newest-cni-20220725134004-44543 in Controller
	  Normal  RegisteredNode           12s   node-controller  Node newest-cni-20220725134004-44543 event: Registered Node newest-cni-20220725134004-44543 in Controller
	  Normal  Starting                 12s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s   kubelet          Node newest-cni-20220725134004-44543 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                1s    kubelet          Node newest-cni-20220725134004-44543 status is now: NodeReady
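	
	The node description above is standard kubectl output; assuming the kubeconfig context this run created (named after the profile, per the "Done!" line earlier), the same view can be pulled with:
	
	  kubectl --context newest-cni-20220725134004-44543 describe node newest-cni-20220725134004-44543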
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [0e4e695d4faa] <==
	* {"level":"info","ts":"2022-07-25T20:41:12.455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:41:12.455Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:41:12.460Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-25T20:41:13.949Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725134004-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:41:13.949Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:41:13.950Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:41:13.950Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:41:13.950Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:41:13.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:41:13.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2022-07-25T20:42:02.568Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"135.470793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-59xjn.17052d2fbbc75a2e\" ","response":"range_response_count:1 size:786"}
	{"level":"info","ts":"2022-07-25T20:42:02.568Z","caller":"traceutil/trace.go:171","msg":"trace[362567834] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-59xjn.17052d2fbbc75a2e; range_end:; response_count:1; response_revision:523; }","duration":"135.606397ms","start":"2022-07-25T20:42:02.432Z","end":"2022-07-25T20:42:02.568Z","steps":["trace[362567834] 'agreement among raft nodes before linearized reading'  (duration: 95.421928ms)","trace[362567834] 'range keys from in-memory index tree'  (duration: 39.948813ms)"],"step_count":2}
	
	* 
	* ==> etcd [c5c26c8d67b9] <==
	* {"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-25T20:40:27.172Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-20220725134004-44543 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-25T20:40:27.172Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.172Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:40:27.173Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-25T20:40:27.174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-25T20:40:27.178Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-25T20:40:49.155Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-25T20:40:49.155Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220725134004-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/07/25 20:40:49 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/25 20:40:49 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-25T20:40:49.163Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-07-25T20:40:49.165Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:40:49.166Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-25T20:40:49.166Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220725134004-44543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  20:42:07 up  1:23,  0 users,  load average: 2.13, 1.19, 1.13
	Linux newest-cni-20220725134004-44543 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [50acabee2bf5] <==
	* W0725 20:40:50.160327       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160342       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160354       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160357       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160360       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160378       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160390       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160417       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160433       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160456       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160467       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160474       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160489       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160490       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160494       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160508       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160509       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160514       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160526       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160527       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160530       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160543       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160558       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160572       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0725 20:40:50.160688       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [e08ee5a56041] <==
	* I0725 20:41:15.748967       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 20:41:15.750324       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 20:41:15.750368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 20:41:15.768294       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 20:41:16.418938       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 20:41:16.649973       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0725 20:41:16.779961       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:41:16.780071       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 20:41:16.780216       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 20:41:16.780073       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 20:41:16.780315       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 20:41:16.781518       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 20:41:17.269182       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 20:41:17.276403       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 20:41:17.321600       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 20:41:17.333709       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 20:41:17.338504       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 20:41:17.343495       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 20:41:19.157450       1 controller.go:611] quota admission added evaluator for: namespaces
	I0725 20:41:19.419211       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.101.59.165]
	I0725 20:41:19.430596       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.84.73]
	I0725 20:41:54.415351       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 20:41:54.713208       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 20:41:54.717181       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [457e1a17e981] <==
	* I0725 20:40:44.856194       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0725 20:40:44.856244       1 shared_informer.go:262] Caches are synced for namespace
	I0725 20:40:44.861671       1 shared_informer.go:262] Caches are synced for node
	I0725 20:40:44.861707       1 range_allocator.go:173] Starting range CIDR allocator
	I0725 20:40:44.861711       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0725 20:40:44.861717       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0725 20:40:44.865313       1 range_allocator.go:374] Set node newest-cni-20220725134004-44543 PodCIDR to [192.168.0.0/24]
	I0725 20:40:44.866548       1 shared_informer.go:262] Caches are synced for crt configmap
	I0725 20:40:44.954303       1 shared_informer.go:262] Caches are synced for cronjob
	I0725 20:40:45.011751       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:40:45.052712       1 shared_informer.go:262] Caches are synced for HPA
	I0725 20:40:45.056434       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:40:45.410573       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mm6ph"
	I0725 20:40:45.473781       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:40:45.473848       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 20:40:45.474215       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:40:45.610684       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0725 20:40:45.675138       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0725 20:40:45.858336       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-x9jqp"
	I0725 20:40:45.861415       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hbn6k"
	I0725 20:40:45.879972       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-x9jqp"
	I0725 20:40:48.409741       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0725 20:40:48.448284       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0725 20:40:48.451657       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0725 20:40:48.456029       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-92v57"
	
	* 
	* ==> kube-controller-manager [9c5616155d93] <==
	* I0725 20:41:54.703314       1 shared_informer.go:262] Caches are synced for PVC protection
	I0725 20:41:54.705991       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0725 20:41:54.707856       1 shared_informer.go:262] Caches are synced for job
	I0725 20:41:54.708329       1 shared_informer.go:262] Caches are synced for attach detach
	I0725 20:41:54.714047       1 shared_informer.go:262] Caches are synced for node
	I0725 20:41:54.714211       1 range_allocator.go:173] Starting range CIDR allocator
	I0725 20:41:54.714234       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0725 20:41:54.714249       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0725 20:41:54.803508       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0725 20:41:54.803778       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0725 20:41:54.807936       1 shared_informer.go:262] Caches are synced for namespace
	I0725 20:41:54.808062       1 shared_informer.go:262] Caches are synced for service account
	I0725 20:41:54.809470       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:41:54.823530       1 shared_informer.go:262] Caches are synced for HPA
	I0725 20:41:54.824786       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 20:41:54.827404       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0725 20:41:54.842014       1 shared_informer.go:262] Caches are synced for disruption
	I0725 20:41:54.842150       1 disruption.go:371] Sending events to api server.
	I0725 20:41:54.902076       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0725 20:41:54.902417       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-dzp59"
	I0725 20:41:54.905995       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-59xjn"
	I0725 20:41:55.385220       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:41:55.385274       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 20:41:55.385352       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 20:41:59.596419       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [50d6d81d01c1] <==
	* I0725 20:41:17.294317       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:41:17.294466       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:41:17.294686       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:41:17.339904       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:41:17.340007       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:41:17.340039       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:41:17.340057       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:41:17.340179       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:41:17.340400       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:41:17.341362       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:41:17.341455       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:41:17.342232       1 config.go:317] "Starting service config controller"
	I0725 20:41:17.342266       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:41:17.342322       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:41:17.342327       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:41:17.342459       1 config.go:444] "Starting node config controller"
	I0725 20:41:17.342537       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:41:17.443033       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:41:17.443160       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 20:41:17.443181       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [b9f0d14e47ae] <==
	* I0725 20:40:47.155642       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0725 20:40:47.155714       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0725 20:40:47.155738       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 20:40:47.177747       1 server_others.go:206] "Using iptables Proxier"
	I0725 20:40:47.177793       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0725 20:40:47.177803       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0725 20:40:47.177812       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0725 20:40:47.177836       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:40:47.177979       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 20:40:47.178200       1 server.go:661] "Version info" version="v1.24.2"
	I0725 20:40:47.188775       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:40:47.189621       1 config.go:317] "Starting service config controller"
	I0725 20:40:47.190511       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 20:40:47.190053       1 config.go:226] "Starting endpoint slice config controller"
	I0725 20:40:47.190565       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 20:40:47.190317       1 config.go:444] "Starting node config controller"
	I0725 20:40:47.190678       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 20:40:47.339601       1 shared_informer.go:262] Caches are synced for node config
	I0725 20:40:47.339768       1 shared_informer.go:262] Caches are synced for service config
	I0725 20:40:47.339854       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [06a18074b6a6] <==
	* W0725 20:41:12.454015       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0725 20:41:13.333332       1 serving.go:348] Generated self-signed cert in-memory
	W0725 20:41:15.657734       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 20:41:15.657799       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:41:15.657807       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 20:41:15.657813       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 20:41:15.718402       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0725 20:41:15.719540       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 20:41:15.721374       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 20:41:15.721418       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 20:41:15.724238       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:41:15.721445       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 20:41:15.824518       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [433c6e459edd] <==
	* E0725 20:40:29.963260       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:40:29.962955       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:29.963270       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:29.962118       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:40:29.963293       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:40:30.792518       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.792566       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.801233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.801302       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.809382       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.809453       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.828142       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 20:40:30.828215       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 20:40:30.840920       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 20:40:30.840989       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 20:40:30.853702       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 20:40:30.853772       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 20:40:30.899220       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 20:40:30.899256       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 20:40:30.907205       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 20:40:30.907242       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0725 20:40:33.858258       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 20:40:49.175633       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 20:40:49.176069       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0725 20:40:49.176525       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2022-07-25 20:41:02 UTC, end at Mon 2022-07-25 20:42:09 UTC. --
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:  > pod="kube-system/metrics-server-5c6f97fb75-92v57"
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:42:08.235052    3883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c6f97fb75-92v57_kube-system(32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c6f97fb75-92v57_kube-system(32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"785bf529b7c1c1cfb21b9ba1741baae8190c9eb16e431cf92e6baaf64a157deb\\\" network for pod \\\"metrics-server-5c6f97fb75-92v57\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c6f97fb75-92v57_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"785bf529b7c1c1cfb21b9ba1741baae8190c9eb16e431cf92e6baaf64a157deb\\\" network for pod \\\"metrics-server-5c6f97fb75-92v57\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c6f97fb75-92v57_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.36 -j CNI-84447376fa706e2fb8dab03e -m comment --comment name: \\\"crio\\\" id: \\\"785bf529b7c1c1cfb21b9ba1741baae8190c9eb16e431cf92e6baaf64a157deb\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-84447376fa706e2fb8dab03e':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c6f97fb75-92v57" podUID=32a2b35f-ed5d-4e41-9f58-4b5e119d8bb3
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: I0725 20:42:08.242430    3883 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0e57b45fcb392998409685c71a3b8750118e8f3f74eb70080ddc373304db0cd2"
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: I0725 20:42:08.253148    3883 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="dd6822ec48475ad1dd5698a76732374af4d099d107fa60fdb283242ae82dbbd6"
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: I0725 20:42:08.280037    3883 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="47a0a60a13dea2416d40d001e5c3f7ef3db2b586185e5eb1c160af928154cefd"
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:42:08.926644    3883 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         rpc error: code = Unknown desc = [failed to set up sandbox container "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" network for pod "coredns-6d4b75cb6d-hbn6k": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-hbn6k_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" network for pod "coredns-6d4b75cb6d-hbn6k": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-hbn6k_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-4e6b294749623f5cc466234d -m comment --comment name: "crio" id: "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4e6b294749623f5cc466234d':No such file or directory
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:  >
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:42:08.926711    3883 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         rpc error: code = Unknown desc = [failed to set up sandbox container "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" network for pod "coredns-6d4b75cb6d-hbn6k": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-hbn6k_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" network for pod "coredns-6d4b75cb6d-hbn6k": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-hbn6k_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-4e6b294749623f5cc466234d -m comment --comment name: "crio" id: "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4e6b294749623f5cc466234d':No such file or directory
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:  > pod="kube-system/coredns-6d4b75cb6d-hbn6k"
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:42:08.926729    3883 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         rpc error: code = Unknown desc = [failed to set up sandbox container "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" network for pod "coredns-6d4b75cb6d-hbn6k": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-hbn6k_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" network for pod "coredns-6d4b75cb6d-hbn6k": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-hbn6k_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-4e6b294749623f5cc466234d -m comment --comment name: "crio" id: "5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4e6b294749623f5cc466234d':No such file or directory
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:         ]
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]:  > pod="kube-system/coredns-6d4b75cb6d-hbn6k"
	Jul 25 20:42:08 newest-cni-20220725134004-44543 kubelet[3883]: E0725 20:42:08.926801    3883 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-hbn6k_kube-system(fc055ddb-e646-4d07-b88b-583f467837dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-hbn6k_kube-system(fc055ddb-e646-4d07-b88b-583f467837dd)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc\\\" network for pod \\\"coredns-6d4b75cb6d-hbn6k\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-hbn6k_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc\\\" network for pod \\\"coredns-6d4b75cb6d-hbn6k\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-hbn6k_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-4e6b294749623f5cc466234d -m comment --comment name: \\\"crio\\\" id: \\\"5cd59e93af9dc5b50188c2ebb8f4511ba77c4acf2528057b0a434c9e4a3583bc\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4e6b294749623f5cc466234d':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-hbn6k" podUID=fc055ddb-e646-4d07-b88b-583f467837dd
	
	* 
	* ==> storage-provisioner [2b1a2f7d82bf] <==
	* I0725 20:40:48.094884       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:40:48.102698       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:40:48.102765       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:40:48.111684       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:40:48.111831       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb8b38a7-2150-4f24-ae19-896a5c3a4dfe", APIVersion:"v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725134004-44543_3e78199b-165f-4cfa-90c3-6e8e5ffdd829 became leader
	I0725 20:40:48.112054       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_3e78199b-165f-4cfa-90c3-6e8e5ffdd829!
	I0725 20:40:48.212259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_3e78199b-165f-4cfa-90c3-6e8e5ffdd829!
	
	* 
	* ==> storage-provisioner [91cc9473ad1e] <==
	* I0725 20:41:18.339498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 20:41:18.352711       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 20:41:18.352752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 20:41:54.417746       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 20:41:54.417889       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_6dbe4956-426a-42da-a748-5c0138a9bac3!
	I0725 20:41:54.417873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb8b38a7-2150-4f24-ae19-896a5c3a4dfe", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220725134004-44543_6dbe4956-426a-42da-a748-5c0138a9bac3 became leader
	I0725 20:41:54.619834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220725134004-44543_6dbe4956-426a-42da-a748-5c0138a9bac3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220725134004-44543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220725134004-44543 describe pod coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220725134004-44543 describe pod coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59: exit status 1 (206.432915ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-hbn6k" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-92v57" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-59xjn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-dzp59" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220725134004-44543 describe pod coredns-6d4b75cb6d-hbn6k metrics-server-5c6f97fb75-92v57 dashboard-metrics-scraper-dffd48c4c-59xjn kubernetes-dashboard-5fd5574d9f-dzp59: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (49.91s)
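Note on the failure mode above: every CreatePodSandbox error in the kubelet log has the same shape. Pod setup fails first because the CNI bridge plugin cannot assign an address ("failed to set bridge addr: could not add IP address to "cni0": permission denied"), and the follow-up teardown then fails because the CNI-* NAT chain it tries to delete was never created (iptables: "Couldn't load target ... No such file or directory"). A minimal spot-check, assuming the newest-cni profile from this run is still up; these commands are illustrative and not part of the test harness:

	# Inspect the CNI NAT chains the kubelet teardown step expects to find:
	minikube -p newest-cni-20220725134004-44543 ssh -- sudo iptables -t nat -S | grep CNI-
	# Repeat the post-mortem pod query that helpers_test.go runs:
	kubectl --context newest-cni-20220725134004-44543 get pods -A --field-selector=status.phase!=Running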


Test pass (248/289)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 75.62
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.24.2/json-events 7.18
11 TestDownloadOnly/v1.24.2/preload-exists 0
14 TestDownloadOnly/v1.24.2/kubectl 0
15 TestDownloadOnly/v1.24.2/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.74
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
18 TestDownloadOnlyKic 6.95
19 TestBinaryMirror 1.68
20 TestOffline 48.55
22 TestAddons/Setup 145.98
26 TestAddons/parallel/MetricsServer 5.52
27 TestAddons/parallel/HelmTiller 13.51
29 TestAddons/parallel/CSI 49.73
30 TestAddons/parallel/Headlamp 10.26
32 TestAddons/serial/GCPAuth 16.23
33 TestAddons/StoppedEnableDisable 12.91
34 TestCertOptions 32.6
35 TestCertExpiration 241.69
36 TestDockerFlags 33.04
37 TestForceSystemdFlag 33.39
38 TestForceSystemdEnv 31.85
40 TestHyperKitDriverInstallOrUpdate 6.44
43 TestErrorSpam/setup 28.61
44 TestErrorSpam/start 2.3
45 TestErrorSpam/status 1.33
46 TestErrorSpam/pause 1.88
47 TestErrorSpam/unpause 1.91
48 TestErrorSpam/stop 13.19
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 41.63
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 39.57
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 1.64
59 TestFunctional/serial/CacheCmd/cache/add_remote 4.11
60 TestFunctional/serial/CacheCmd/cache/add_local 1.86
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.45
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.5
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.64
68 TestFunctional/serial/ExtraConfig 49.74
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 3.08
71 TestFunctional/serial/LogsFileCmd 3.15
73 TestFunctional/parallel/ConfigCmd 0.46
75 TestFunctional/parallel/DryRun 2.05
76 TestFunctional/parallel/InternationalLanguage 0.71
77 TestFunctional/parallel/StatusCmd 1.4
80 TestFunctional/parallel/ServiceCmd 13.04
82 TestFunctional/parallel/AddonsCmd 0.29
83 TestFunctional/parallel/PersistentVolumeClaim 25.38
85 TestFunctional/parallel/SSHCmd 1.1
86 TestFunctional/parallel/CpCmd 1.7
87 TestFunctional/parallel/MySQL 20.06
88 TestFunctional/parallel/FileSync 0.44
89 TestFunctional/parallel/CertSync 2.63
93 TestFunctional/parallel/NodeLabels 0.04
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
98 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
100 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
102 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
106 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
107 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
108 TestFunctional/parallel/ProfileCmd/profile_list 0.58
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
110 TestFunctional/parallel/MountCmd/any-port 9.53
111 TestFunctional/parallel/MountCmd/specific-port 2.93
112 TestFunctional/parallel/DockerEnv/bash 1.79
113 TestFunctional/parallel/Version/short 0.09
114 TestFunctional/parallel/Version/components 0.7
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.1
120 TestFunctional/parallel/ImageCommands/Setup 1.91
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.29
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.87
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.91
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.31
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.7
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.53
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.4
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
131 TestFunctional/delete_addon-resizer_images 0.17
132 TestFunctional/delete_my-image_image 0.07
133 TestFunctional/delete_minikube_cached_images 0.07
143 TestJSONOutput/start/Command 43.73
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.67
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.69
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.37
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.77
168 TestKicCustomNetwork/create_custom_network 30.2
169 TestKicCustomNetwork/use_default_bridge_network 29.93
170 TestKicExistingNetwork 29.66
171 TestKicCustomSubnet 29.36
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 62.75
176 TestMountStart/serial/StartWithMountFirst 7.69
177 TestMountStart/serial/VerifyMountFirst 0.44
178 TestMountStart/serial/StartWithMountSecond 7.59
179 TestMountStart/serial/VerifyMountSecond 0.45
180 TestMountStart/serial/DeleteFirst 2.32
181 TestMountStart/serial/VerifyMountPostDelete 0.43
182 TestMountStart/serial/Stop 1.63
183 TestMountStart/serial/RestartStopped 5.4
184 TestMountStart/serial/VerifyMountPostStop 0.44
187 TestMultiNode/serial/FreshStart2Nodes 99.1
188 TestMultiNode/serial/DeployApp2Nodes 5.67
189 TestMultiNode/serial/PingHostFrom2Pods 0.86
190 TestMultiNode/serial/AddNode 34.17
191 TestMultiNode/serial/ProfileList 0.57
192 TestMultiNode/serial/CopyFile 16.9
193 TestMultiNode/serial/StopNode 14.14
194 TestMultiNode/serial/StartAfterStop 19.87
195 TestMultiNode/serial/RestartKeepsNodes 131.66
196 TestMultiNode/serial/DeleteNode 18.66
197 TestMultiNode/serial/StopMultiNode 25.05
198 TestMultiNode/serial/RestartMultiNode 58.92
199 TestMultiNode/serial/ValidateNameConflict 31.87
205 TestScheduledStopUnix 101.76
206 TestSkaffold 65.81
208 TestInsufficientStorage 13.08
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.61
225 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 9.96
226 TestStoppedBinaryUpgrade/Setup 1.38
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.56
230 TestPause/serial/Start 44.37
231 TestPause/serial/SecondStartNoReconfiguration 68.86
232 TestPause/serial/Pause 0.75
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
243 TestNoKubernetes/serial/StartWithK8s 27.6
244 TestNoKubernetes/serial/StartWithStopK8s 17.27
245 TestNoKubernetes/serial/Start 6.81
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.46
247 TestNoKubernetes/serial/ProfileList 1.58
248 TestNoKubernetes/serial/Stop 1.64
249 TestNoKubernetes/serial/StartNoArgs 4.32
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
251 TestNetworkPlugins/group/auto/Start 45.62
252 TestNetworkPlugins/group/auto/KubeletFlags 0.47
253 TestNetworkPlugins/group/auto/NetCatPod 12.64
254 TestNetworkPlugins/group/auto/DNS 0.13
255 TestNetworkPlugins/group/auto/Localhost 0.1
256 TestNetworkPlugins/group/auto/HairPin 5.12
257 TestNetworkPlugins/group/kindnet/Start 48.78
258 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
259 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
260 TestNetworkPlugins/group/kindnet/NetCatPod 11.69
261 TestNetworkPlugins/group/kindnet/DNS 0.12
262 TestNetworkPlugins/group/kindnet/Localhost 0.11
263 TestNetworkPlugins/group/kindnet/HairPin 0.12
264 TestNetworkPlugins/group/cilium/Start 75.36
265 TestNetworkPlugins/group/calico/Start 71.95
266 TestNetworkPlugins/group/cilium/ControllerPod 5.02
267 TestNetworkPlugins/group/cilium/KubeletFlags 0.47
268 TestNetworkPlugins/group/cilium/NetCatPod 11.32
269 TestNetworkPlugins/group/cilium/DNS 0.12
270 TestNetworkPlugins/group/cilium/Localhost 0.11
271 TestNetworkPlugins/group/cilium/HairPin 0.11
272 TestNetworkPlugins/group/false/Start 45.61
273 TestNetworkPlugins/group/calico/ControllerPod 5.02
274 TestNetworkPlugins/group/calico/KubeletFlags 0.47
275 TestNetworkPlugins/group/calico/NetCatPod 11.75
276 TestNetworkPlugins/group/false/KubeletFlags 0.47
277 TestNetworkPlugins/group/false/NetCatPod 11.79
278 TestNetworkPlugins/group/calico/DNS 0.13
279 TestNetworkPlugins/group/calico/Localhost 0.11
280 TestNetworkPlugins/group/calico/HairPin 0.11
281 TestNetworkPlugins/group/false/DNS 0.12
282 TestNetworkPlugins/group/false/Localhost 0.11
283 TestNetworkPlugins/group/false/HairPin 5.11
284 TestNetworkPlugins/group/bridge/Start 55.46
285 TestNetworkPlugins/group/enable-default-cni/Start 46.17
286 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.53
287 TestNetworkPlugins/group/enable-default-cni/NetCatPod 41.75
288 TestNetworkPlugins/group/bridge/KubeletFlags 0.97
289 TestNetworkPlugins/group/bridge/NetCatPod 11.92
290 TestNetworkPlugins/group/bridge/DNS 0.12
291 TestNetworkPlugins/group/bridge/Localhost 0.1
292 TestNetworkPlugins/group/bridge/HairPin 0.12
293 TestNetworkPlugins/group/kubenet/Start 46.05
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
299 TestNetworkPlugins/group/kubenet/KubeletFlags 0.46
300 TestNetworkPlugins/group/kubenet/NetCatPod 11.79
301 TestNetworkPlugins/group/kubenet/DNS 0.12
302 TestNetworkPlugins/group/kubenet/Localhost 0.11
305 TestStartStop/group/no-preload/serial/FirstStart 92.24
306 TestStartStop/group/no-preload/serial/DeployApp 9.77
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.71
308 TestStartStop/group/no-preload/serial/Stop 12.54
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
310 TestStartStop/group/no-preload/serial/SecondStart 300.05
313 TestStartStop/group/old-k8s-version/serial/Stop 1.62
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.58
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.48
321 TestStartStop/group/embed-certs/serial/FirstStart 45.42
322 TestStartStop/group/embed-certs/serial/DeployApp 9.69
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.7
324 TestStartStop/group/embed-certs/serial/Stop 12.5
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
326 TestStartStop/group/embed-certs/serial/SecondStart 302.99
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.02
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.57
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.56
333 TestStartStop/group/default-k8s-different-port/serial/FirstStart 41.78
334 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.77
335 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.73
336 TestStartStop/group/default-k8s-different-port/serial/Stop 12.54
337 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.33
338 TestStartStop/group/default-k8s-different-port/serial/SecondStart 298.03
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.02
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.55
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.48
345 TestStartStop/group/newest-cni/serial/FirstStart 42.79
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
348 TestStartStop/group/newest-cni/serial/Stop 12.56
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
350 TestStartStop/group/newest-cni/serial/SecondStart 18.63
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.58
TestDownloadOnly/v1.16.0/json-events (75.62s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725121744-44543 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725121744-44543 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (1m15.615255394s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (75.62s)
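For reference, an individual subtest like this one can be re-run in isolation with go test's -run filter. A sketch only, assuming a minikube source checkout with out/minikube-darwin-amd64 already built; the exact harness flags this CI job passes are not shown here:

	# Hypothetical local invocation; the -run pattern selects just this subtest.
	go test ./test/integration -run "TestDownloadOnly/v1.16.0/json-events" -timeout 30m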

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220725121744-44543
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220725121744-44543: exit status 85 (295.389081ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220725121744-44543 | jenkins | v1.26.0 | 25 Jul 22 12:17 PDT |          |
	|         | download-only-20220725121744-44543 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 12:17:44
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 12:17:44.649411   44545 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:17:44.649618   44545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:17:44.649624   44545 out.go:309] Setting ErrFile to fd 2...
	I0725 12:17:44.649628   44545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:17:44.649730   44545 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	W0725 12:17:44.649833   44545 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/config/config.json: no such file or directory
	I0725 12:17:44.650568   44545 out.go:303] Setting JSON to true
	I0725 12:17:44.667098   44545 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11836,"bootTime":1658764828,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:17:44.667208   44545 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:17:44.692595   44545 out.go:97] [download-only-20220725121744-44543] minikube v1.26.0 on Darwin 12.4
	I0725 12:17:44.692687   44545 notify.go:193] Checking for updates...
	W0725 12:17:44.692701   44545 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 12:17:44.714235   44545 out.go:169] MINIKUBE_LOCATION=14555
	I0725 12:17:44.735440   44545 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:17:44.758612   44545 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:17:44.779250   44545 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:17:44.800583   44545 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	W0725 12:17:44.842200   44545 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 12:17:44.842386   44545 driver.go:365] Setting default libvirt URI to qemu:///system
	W0725 12:18:44.235417   44545 docker.go:113] docker version returned error: deadline exceeded running "docker version --format {{.Server.Os}}-{{.Server.Version}}": signal: killed
	I0725 12:18:44.257408   44545 out.go:97] Using the docker driver based on user configuration
	I0725 12:18:44.257427   44545 start.go:284] selected driver: docker
	I0725 12:18:44.257435   44545 start.go:808] validating driver "docker" against <nil>
	I0725 12:18:44.257544   44545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:18:44.387583   44545 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:18:44.409826   44545 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0725 12:18:44.431425   44545 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0725 12:18:44.474509   44545 out.go:169] 
	W0725 12:18:44.495527   44545 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0725 12:18:44.516248   44545 out.go:169] 
	I0725 12:18:44.563990   44545 out.go:169] 
	W0725 12:18:44.584923   44545 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0725 12:18:44.585045   44545 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0725 12:18:44.585085   44545 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 12:18:44.606092   44545 out.go:169] 
	I0725 12:18:44.627184   44545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:18:44.751712   44545 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0725 12:18:44.773442   44545 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0725 12:18:44.773529   44545 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0725 12:18:44.817955   44545 out.go:169] 
	W0725 12:18:44.839225   44545 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0725 12:18:44.839295   44545 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0725 12:18:44.839338   44545 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0725 12:18:44.860039   44545 out.go:169] 
	I0725 12:18:44.902271   44545 out.go:169] 
	W0725 12:18:44.923299   44545 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0725 12:18:44.944010   44545 out.go:169] 
	I0725 12:18:44.965223   44545 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0725 12:18:44.965354   44545 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 12:18:44.986199   44545 out.go:169] Using Docker Desktop driver with root privileges
	I0725 12:18:45.007016   44545 cni.go:95] Creating CNI manager for ""
	I0725 12:18:45.007034   44545 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:18:45.007506   44545 start_flags.go:310] config:
	{Name:download-only-20220725121744-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220725121744-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:18:45.029204   44545 out.go:97] Starting control plane node download-only-20220725121744-44543 in cluster download-only-20220725121744-44543
	I0725 12:18:45.029235   44545 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 12:18:45.050204   44545 out.go:97] Pulling base image ...
	I0725 12:18:45.050244   44545 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 12:18:45.050282   44545 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 12:18:45.051026   44545 cache.go:107] acquiring lock: {Name:mk3cfff2a3ebc7f66abd20c921baf88f4c733b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.051449   44545 cache.go:107] acquiring lock: {Name:mkb3d51b20e7c3c1f6f7ae59967d05bc868d7758 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.051342   44545 cache.go:107] acquiring lock: {Name:mk659bfbaa76b406fc1eecd3b30848bf2d585c37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.051521   44545 cache.go:107] acquiring lock: {Name:mk823110f93115484246c52fa0cdabe1f84c2490 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.051636   44545 cache.go:107] acquiring lock: {Name:mkd6dbd21b2bd4f9414543b719ca093c8b4c7ebc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.051746   44545 cache.go:107] acquiring lock: {Name:mkcd4e31b13b91c4bc92623a77cb7956857fb25f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.051789   44545 cache.go:107] acquiring lock: {Name:mkf17022a8cbde3f65aadde40000c4ec9080bc43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.052234   44545 cache.go:107] acquiring lock: {Name:mkad5d992d7d0ddfedc128f1b3c6827491c37bd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 12:18:45.052181   44545 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/download-only-20220725121744-44543/config.json ...
	I0725 12:18:45.052377   44545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/download-only-20220725121744-44543/config.json: {Name:mk84062bf8dd2a68fc5fcefebff23fbf6f1c7bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 12:18:45.052696   44545 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0725 12:18:45.052701   44545 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0725 12:18:45.052707   44545 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0725 12:18:45.052697   44545 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0725 12:18:45.052736   44545 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0725 12:18:45.052699   44545 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0725 12:18:45.052682   44545 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0725 12:18:45.052823   44545 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 12:18:45.052972   44545 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0725 12:18:45.053400   44545 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0725 12:18:45.053399   44545 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0725 12:18:45.053408   44545 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0725 12:18:45.056262   44545 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.057396   44545 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.058262   44545 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.058337   44545 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.058813   44545 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.058894   44545 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.059125   44545 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.059314   44545 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0725 12:18:45.115097   44545 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0725 12:18:45.115304   44545 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0725 12:18:45.115425   44545 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0725 12:18:45.775628   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0725 12:18:45.784481   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 12:18:45.855302   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0725 12:18:45.855317   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0725 12:18:45.909873   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0725 12:18:45.909891   44545 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 858.26221ms
	I0725 12:18:45.909914   44545 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0725 12:18:45.965371   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0725 12:18:46.083611   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0725 12:18:46.083638   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0725 12:18:46.206975   44545 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0725 12:18:46.944963   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 12:18:46.944978   44545 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.894454493s
	I0725 12:18:46.944991   44545 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 12:18:47.997635   44545 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0725 12:18:48.351677   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0725 12:18:48.351694   44545 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 3.299938453s
	I0725 12:18:48.351703   44545 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0725 12:18:49.282125   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0725 12:18:49.282143   44545 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.230570217s
	I0725 12:18:49.282152   44545 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0725 12:18:49.798023   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0725 12:18:49.798043   44545 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 4.746715871s
	I0725 12:18:49.798052   44545 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0725 12:18:50.069527   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0725 12:18:50.069544   44545 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 5.018931034s
	I0725 12:18:50.069554   44545 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0725 12:18:50.267909   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0725 12:18:50.267927   44545 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 5.216467798s
	I0725 12:18:50.267936   44545 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0725 12:18:50.988369   44545 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0725 12:18:50.988395   44545 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 5.936767479s
	I0725 12:18:50.988417   44545 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0725 12:18:50.988434   44545 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220725121744-44543"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
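
Everything in the v1.16.0 log above traces back to a single root cause: the Docker daemon socket was unreachable ("dial unix docker.raw.sock: connect: no such file or directory"), so the probed info struct came back with NCPU:0 and MemTotal:0 and the CPU/memory validations failed against zeros. A minimal way to reproduce the same probe by hand, assuming a local Docker Desktop install (a sketch, not part of the suite):

    # Print the CPU count and memory the daemon advertises; zeros or a
    # connection error mean the daemon is down, not that the host lacks resources.
    docker system info --format '{{.NCPU}} {{.MemTotal}}'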

TestDownloadOnly/v1.24.2/json-events (7.18s)

=== RUN   TestDownloadOnly/v1.24.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725121744-44543 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220725121744-44543 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=docker --driver=docker : (7.181311164s)
--- PASS: TestDownloadOnly/v1.24.2/json-events (7.18s)

TestDownloadOnly/v1.24.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.2/preload-exists
--- PASS: TestDownloadOnly/v1.24.2/preload-exists (0.00s)

TestDownloadOnly/v1.24.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.2/kubectl
--- PASS: TestDownloadOnly/v1.24.2/kubectl (0.00s)

TestDownloadOnly/v1.24.2/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.24.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220725121744-44543
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220725121744-44543: exit status 85 (285.105408ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220725121744-44543 | jenkins | v1.26.0 | 25 Jul 22 12:17 PDT |          |
	|         | download-only-20220725121744-44543 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	| start   | -o=json --download-only -p         | download-only-20220725121744-44543 | jenkins | v1.26.0 | 25 Jul 22 12:19 PDT |          |
	|         | download-only-20220725121744-44543 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.24.2       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/25 12:19:00
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 12:19:00.797705   46100 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:19:00.797881   46100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:19:00.797886   46100 out.go:309] Setting ErrFile to fd 2...
	I0725 12:19:00.797890   46100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:19:00.797996   46100 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	W0725 12:19:00.798106   46100 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/config/config.json: no such file or directory
	I0725 12:19:00.798444   46100 out.go:303] Setting JSON to true
	I0725 12:19:00.813072   46100 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11912,"bootTime":1658764828,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:19:00.813178   46100 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:19:00.835098   46100 out.go:97] [download-only-20220725121744-44543] minikube v1.26.0 on Darwin 12.4
	I0725 12:19:00.835339   46100 notify.go:193] Checking for updates...
	W0725 12:19:00.835362   46100 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 12:19:00.857105   46100 out.go:169] MINIKUBE_LOCATION=14555
	I0725 12:19:00.879135   46100 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:19:00.900984   46100 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:19:00.922250   46100 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:19:00.944260   46100 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	W0725 12:19:00.988080   46100 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 12:19:00.988752   46100 config.go:178] Loaded profile config "download-only-20220725121744-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0725 12:19:00.988844   46100 start.go:716] api.Load failed for download-only-20220725121744-44543: filestore "download-only-20220725121744-44543": Docker machine "download-only-20220725121744-44543" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0725 12:19:00.988918   46100 driver.go:365] Setting default libvirt URI to qemu:///system
	W0725 12:19:00.988951   46100 start.go:716] api.Load failed for download-only-20220725121744-44543: filestore "download-only-20220725121744-44543": Docker machine "download-only-20220725121744-44543" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0725 12:19:01.058115   46100 docker.go:137] docker version: linux-20.10.17
	I0725 12:19:01.058234   46100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:19:01.193428   46100 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-07-25 19:19:01.116913517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:19:01.215530   46100 out.go:97] Using the docker driver based on existing profile
	I0725 12:19:01.215565   46100 start.go:284] selected driver: docker
	I0725 12:19:01.215577   46100 start.go:808] validating driver "docker" against &{Name:download-only-20220725121744-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220725121744-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:19:01.215854   46100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:19:01.349717   46100 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-07-25 19:19:01.276283115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:19:01.351805   46100 cni.go:95] Creating CNI manager for ""
	I0725 12:19:01.351824   46100 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0725 12:19:01.351836   46100 start_flags.go:310] config:
	{Name:download-only-20220725121744-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:download-only-20220725121744-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:19:01.373785   46100 out.go:97] Starting control plane node download-only-20220725121744-44543 in cluster download-only-20220725121744-44543
	I0725 12:19:01.373901   46100 cache.go:120] Beginning downloading kic base image for docker with docker
	I0725 12:19:01.395811   46100 out.go:97] Pulling base image ...
	I0725 12:19:01.395883   46100 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 12:19:01.395957   46100 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0725 12:19:01.459314   46100 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0725 12:19:01.459562   46100 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0725 12:19:01.459578   46100 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory, skipping pull
	I0725 12:19:01.459584   46100 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in cache, skipping pull
	I0725 12:19:01.459591   46100 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 as a tarball
	I0725 12:19:01.461796   46100 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0725 12:19:01.461807   46100 cache.go:57] Caching tarball of preloaded images
	I0725 12:19:01.461978   46100 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0725 12:19:01.483914   46100 out.go:97] Downloading Kubernetes v1.24.2 preload ...
	I0725 12:19:01.483986   46100 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 ...
	I0725 12:19:01.600804   46100 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4?checksum=md5:015c5bcd220ede3ee64238beb9734721 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220725121744-44543"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.2/LogsDuration (0.29s)
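
The v1.24.2 run succeeds where v1.16.0 fell back to caching individual images because a preload tarball is published for this Kubernetes version; the download URL above embeds the md5 checksum that minikube verifies. The same artifact can be checked by hand (a sketch; md5 is the macOS tool, md5sum on Linux):

    # Fetch the preload tarball and compare its digest with the checksum in the URL.
    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
    md5 -q preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4   # expected: 015c5bcd220ede3ee64238beb9734721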

TestDownloadOnly/DeleteAll (0.74s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.74s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220725121744-44543
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

TestDownloadOnlyKic (6.95s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220725121909-44543 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220725121909-44543 --force --alsologtostderr --driver=docker : (5.792165s)
helpers_test.go:175: Cleaning up "download-docker-20220725121909-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220725121909-44543
--- PASS: TestDownloadOnlyKic (6.95s)

TestBinaryMirror (1.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220725121916-44543 --alsologtostderr --binary-mirror http://127.0.0.1:63373 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220725121916-44543 --alsologtostderr --binary-mirror http://127.0.0.1:63373 --driver=docker : (1.012551544s)
helpers_test.go:175: Cleaning up "binary-mirror-20220725121916-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220725121916-44543
--- PASS: TestBinaryMirror (1.68s)
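
TestBinaryMirror points minikube at a local HTTP server for the kubectl/kubeadm/kubelet downloads instead of storage.googleapis.com. A rough equivalent outside the harness (the port, profile name, and served directory are illustrative; the server must mirror the release path layout seen in the download URLs earlier in this report):

    # Serve a directory laid out like .../release/<version>/bin/<os>/<arch>/<binary>
    python3 -m http.server 63373 &
    out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:63373 --driver=docker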

TestOffline (48.55s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220725125922-44543 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220725125922-44543 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (45.697637783s)
helpers_test.go:175: Cleaning up "offline-docker-20220725125922-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220725125922-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220725125922-44543: (2.850363154s)
--- PASS: TestOffline (48.55s)
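
TestOffline verifies that a full start can succeed without fetching anything new, which works here because the earlier download-only runs populated the shared .minikube cache. The two-step pattern the suite relies on, sketched with an illustrative profile name:

    # 1. Warm the cache while the network is reachable.
    out/minikube-darwin-amd64 start --download-only -p offline-demo --driver=docker
    # 2. A later start can then be served from the cache.
    out/minikube-darwin-amd64 start -p offline-demo --memory=2048 --wait=true --driver=docker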

TestAddons/Setup (145.98s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220725121918-44543 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220725121918-44543 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.982662904s)
--- PASS: TestAddons/Setup (145.98s)

TestAddons/parallel/MetricsServer (5.52s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.187653ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-mzb9f" [7357f4b6-3546-44bd-88c1-1d62547d8a99] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009010053s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220725121918-44543 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.52s)

TestAddons/parallel/HelmTiller (13.51s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.542309ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-rn4hq" [325c7737-0ddf-4233-a291-0052213988f8] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00765269s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220725121918-44543 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220725121918-44543 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.986671416s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.51s)

TestAddons/parallel/CSI (49.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 4.167829ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220725121918-44543 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:516: (dbg) Done: kubectl --context addons-20220725121918-44543 create -f testdata/csi-hostpath-driver/pvc.yaml: (2.949756692s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220725121918-44543 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220725121918-44543 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [64a5770a-b1a0-4ce3-9570-43851d18ef46] Pending
helpers_test.go:342: "task-pv-pod" [64a5770a-b1a0-4ce3-9570-43851d18ef46] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [64a5770a-b1a0-4ce3-9570-43851d18ef46] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.005881958s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220725121918-44543 create -f testdata/csi-hostpath-driver/snapshot.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220725121918-44543 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220725121918-44543 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220725121918-44543 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220725121918-44543 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220725121918-44543 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220725121918-44543 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220725121918-44543 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [ff653c06-ef2a-463e-a59a-ebc15d64bf38] Pending
helpers_test.go:342: "task-pv-pod-restore" [ff653c06-ef2a-463e-a59a-ebc15d64bf38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [ff653c06-ef2a-463e-a59a-ebc15d64bf38] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.006673356s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220725121918-44543 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220725121918-44543 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220725121918-44543 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.781824707s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.73s)
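
The CSI sequence above is built on a single polling primitive: read one field out of an object via kubectl's jsonpath output and retry until it matches (PVC phase Bound, volumesnapshot readyToUse, pod Running). A minimal Go sketch of that loop follows; the context name and timeout come from the log, but the helper itself is illustrative, not the actual helpers_test.go source.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc -o jsonpath={.status.phase}` until
// the claim reports the wanted phase or the deadline passes.
func waitForPVCPhase(ctx, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q never reached phase %q within %v", name, want, timeout)
}

func main() {
	err := waitForPVCPhase("addons-20220725121918-44543", "hpvc", "Bound", 6*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
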

TestAddons/parallel/Headlamp (10.26s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220725121918-44543 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220725121918-44543 --alsologtostderr -v=1: (1.250693021s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-748fh" [664e4dde-2ed2-4130-8eb0-b700e2d3242f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-866f5bd7bc-748fh" [664e4dde-2ed2-4130-8eb0-b700e2d3242f] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.010280381s
--- PASS: TestAddons/parallel/Headlamp (10.26s)

TestAddons/serial/GCPAuth (16.23s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220725121918-44543 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220725121918-44543 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f6bf2d3c-a6af-473b-99b2-657182388684] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [f6bf2d3c-a6af-473b-99b2-657182388684] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.009407661s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220725121918-44543 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220725121918-44543 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220725121918-44543 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220725121918-44543 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220725121918-44543 addons disable gcp-auth --alsologtostderr -v=1: (6.661368736s)
--- PASS: TestAddons/serial/GCPAuth (16.23s)
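
The gcp-auth addon works by mutating pod creation: its webhook mounts the credentials file into the pod and sets GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT, which is why the test can simply printenv inside the freshly created busybox pod. A sketch of that verification; the context name comes from the log, the assertion itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exec into the busybox pod and read the env var the webhook injects.
	out, err := exec.Command("kubectl", "--context", "addons-20220725121918-44543",
		"exec", "busybox", "--", "/bin/sh", "-c",
		"printenv GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		fmt.Println("printenv failed (env var not injected?):", err)
		return
	}
	if path := strings.TrimSpace(string(out)); path != "" {
		fmt.Println("credentials mounted at:", path)
	}
}
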

TestAddons/StoppedEnableDisable (12.91s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220725121918-44543
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220725121918-44543: (12.526249966s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220725121918-44543
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220725121918-44543
--- PASS: TestAddons/StoppedEnableDisable (12.91s)

TestCertOptions (32.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220725130052-44543 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220725130052-44543 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (28.835456306s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220725130052-44543 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220725130052-44543 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220725130052-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220725130052-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220725130052-44543: (2.765762923s)
--- PASS: TestCertOptions (32.60s)
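
TestCertOptions passes extra --apiserver-ips and --apiserver-names, then verifies they were baked into the apiserver serving certificate by dumping it with openssl inside the node; the port check against /etc/kubernetes/admin.conf is a separate step not sketched here. A minimal Go version of the certificate half (profile name from the log; the assertion logic is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate in text form over minikube ssh.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p",
		"cert-options-20220725130052-44543", "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// The extra SANs from the start flags must appear in the dump.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing expected SAN:", want)
		}
	}
}
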

TestCertExpiration (241.69s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220725130044-44543 --memory=2048 --cert-expiration=3m --driver=docker 
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220725130044-44543 --memory=2048 --cert-expiration=3m --driver=docker : (32.067758968s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220725130044-44543 --memory=2048 --cert-expiration=8760h --driver=docker 
E0725 13:04:16.479941   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:04:36.957535   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220725130044-44543 --memory=2048 --cert-expiration=8760h --driver=docker : (26.881637625s)
helpers_test.go:175: Cleaning up "cert-expiration-20220725130044-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220725130044-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220725130044-44543: (2.735982346s)
--- PASS: TestCertExpiration (241.69s)
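
Most of TestCertExpiration's 241s is waiting: it starts a cluster whose certs expire in 3m, lets them lapse, then starts again with --cert-expiration=8760h so minikube has to regenerate them. A sketch of that flow, under the assumption that a plain sleep past the 3m window is enough to force regeneration (the real test's wait logic may differ):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func run(args ...string) error {
	return exec.Command("out/minikube-darwin-amd64", args...).Run()
}

func main() {
	p := "cert-expiration-20220725130044-44543"
	// First start: certs valid for only 3 minutes.
	if err := run("start", "-p", p, "--memory=2048", "--cert-expiration=3m", "--driver=docker"); err != nil {
		fmt.Println("first start failed:", err)
		return
	}
	time.Sleep(3 * time.Minute) // let the short-lived certs lapse
	// Second start: a long expiration forces minikube to regenerate certs.
	if err := run("start", "-p", p, "--memory=2048", "--cert-expiration=8760h", "--driver=docker"); err != nil {
		fmt.Println("restart after expiry failed:", err)
	}
	_ = run("delete", "-p", p)
}
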

TestDockerFlags (33.04s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220725130019-44543 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220725130019-44543 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (28.930275511s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220725130019-44543 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220725130019-44543 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220725130019-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220725130019-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220725130019-44543: (2.950735767s)
--- PASS: TestDockerFlags (33.04s)
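
The two systemctl probes above are the actual assertions: --docker-env values must show up in dockerd's Environment property and --docker-opt values in its ExecStart line. A sketch of the Environment half (profile name from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read dockerd's systemd Environment property inside the node.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p",
		"docker-flags-20220725130019-44543", "ssh",
		"sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// Each --docker-env flag should have landed in the property value.
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing docker env:", want)
		}
	}
}
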

TestForceSystemdFlag (33.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220725130010-44543 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220725130010-44543 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (29.912034029s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220725130010-44543 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220725130010-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220725130010-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220725130010-44543: (2.874289012s)
--- PASS: TestForceSystemdFlag (33.39s)

TestForceSystemdEnv (31.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220725125947-44543 --memory=2048 --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220725125947-44543 --memory=2048 --alsologtostderr -v=5 --driver=docker : (28.340287714s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220725125947-44543 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220725125947-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220725125947-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220725125947-44543: (2.908834934s)
--- PASS: TestForceSystemdEnv (31.85s)

TestHyperKitDriverInstallOrUpdate (6.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
E0725 12:59:47.489283   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (6.44s)

TestErrorSpam/setup (28.61s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220725122316-44543 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220725122316-44543 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 --driver=docker : (28.605922571s)
--- PASS: TestErrorSpam/setup (28.61s)

TestErrorSpam/start (2.3s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 start --dry-run
--- PASS: TestErrorSpam/start (2.30s)

TestErrorSpam/status (1.33s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 status
--- PASS: TestErrorSpam/status (1.33s)

TestErrorSpam/pause (1.88s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 pause
--- PASS: TestErrorSpam/pause (1.88s)

TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (13.19s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 stop: (12.521153774s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220725122316-44543 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220725122316-44543 stop
--- PASS: TestErrorSpam/stop (13.19s)
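
The TestErrorSpam family runs ordinary subcommands (start, status, pause, unpause, stop) against the nospam profile and fails if the output contains unexpected warning or error text. A sketch of the idea; the real pattern list lives in error_spam_test.go, so the word list below is purely illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run a subcommand and capture everything it prints.
	out, _ := exec.Command("out/minikube-darwin-amd64", "-p",
		"nospam-20220725122316-44543", "status").CombinedOutput()
	// Flag any line matching a "spam" pattern (illustrative list).
	for _, line := range strings.Split(string(out), "\n") {
		for _, bad := range []string{"error", "fail", "warning"} {
			if strings.Contains(strings.ToLower(line), bad) {
				fmt.Println("unexpected spam:", line)
			}
		}
	}
}
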

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/test/nested/copy/44543/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.63s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (41.634136214s)
--- PASS: TestFunctional/serial/StartWithProxy (41.63s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.57s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --alsologtostderr -v=8: (39.569876671s)
functional_test.go:655: soft start took 39.570403735s for "functional-20220725122408-44543" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.57s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.64s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220725122408-44543 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220725122408-44543 get po -A: (1.64267377s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.64s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add k8s.gcr.io/pause:3.1: (1.06910309s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add k8s.gcr.io/pause:3.3: (1.578269802s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add k8s.gcr.io/pause:latest: (1.461318869s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220725122408-44543 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1973519183/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add minikube-local-cache-test:functional-20220725122408-44543
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache add minikube-local-cache-test:functional-20220725122408-44543: (1.343913619s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache delete minikube-local-cache-test:functional-20220725122408-44543
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220725122408-44543
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.86s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.45s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (426.868795ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 cache reload: (1.040157579s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
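
The round-trip above is the interesting part of the cache suite: delete the image inside the node, prove it is gone (crictl inspecti exits 1, as captured in the stdout block), run `cache reload`, and prove it is back. A compact Go sketch of the same sequence, using exit status as the assertion:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs a minikube subcommand against the functional-test profile.
func mk(args ...string) error {
	return exec.Command("out/minikube-darwin-amd64",
		append([]string{"-p", "functional-20220725122408-44543"}, args...)...).Run()
}

func main() {
	// Remove the cached image from the node's container runtime.
	_ = mk("ssh", "sudo docker rmi k8s.gcr.io/pause:latest")
	// inspecti should now fail: the image must really be gone.
	if err := mk("ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err == nil {
		fmt.Println("image unexpectedly still present")
	}
	// Reload the cache, then the image must be present again.
	_ = mk("cache", "reload")
	if err := mk("ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
		fmt.Println("image not restored by cache reload:", err)
	}
}
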

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 kubectl -- --context functional-20220725122408-44543 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220725122408-44543 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.64s)

TestFunctional/serial/ExtraConfig (49.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.741830468s)
functional_test.go:753: restart took 49.741991655s for "functional-20220725122408-44543" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.74s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220725122408-44543 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
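
ComponentHealth pulls the control-plane pods as JSON and checks each is Running and Ready, which is where the phase/status pairs above come from. A sketch of that check; the struct models only the fields the assertion needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just enough of `kubectl get po -o json` for the check.
type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-20220725122408-44543",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status != "True" {
				fmt.Printf("%s is not Ready\n", p.Metadata.Name)
			}
		}
	}
}
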

TestFunctional/serial/LogsCmd (3.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 logs: (3.083886045s)
--- PASS: TestFunctional/serial/LogsCmd (3.08s)

TestFunctional/serial/LogsFileCmd (3.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2676851127/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2676851127/001/logs.txt: (3.152806535s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.15s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 config get cpus: exit status 14 (51.711461ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 config get cpus: exit status 14 (53.232854ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
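
The exit status 14 captured above is the behaviour the test drives: `config get` on an unset key fails with code 14 and "specified key could not be found in config". A sketch that reads the exit code the same way the harness does:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p",
		"functional-20220725122408-44543", "config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 14 when the key is unset
	}
}
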

TestFunctional/parallel/DryRun (2.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (922.207474ms)
-- stdout --
	* [functional-20220725122408-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0725 12:27:15.255359   48017 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:27:15.276017   48017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:27:15.276028   48017 out.go:309] Setting ErrFile to fd 2...
	I0725 12:27:15.276035   48017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:27:15.276188   48017 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:27:15.300306   48017 out.go:303] Setting JSON to false
	I0725 12:27:15.317921   48017 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":12407,"bootTime":1658764828,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:27:15.317997   48017 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:27:15.378056   48017 out.go:177] * [functional-20220725122408-44543] minikube v1.26.0 on Darwin 12.4
	I0725 12:27:15.462772   48017 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 12:27:15.520539   48017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:27:15.562465   48017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:27:15.604487   48017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:27:15.678837   48017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 12:27:15.700949   48017 config.go:178] Loaded profile config "functional-20220725122408-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 12:27:15.701410   48017 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 12:27:15.784921   48017 docker.go:137] docker version: linux-20.10.17
	I0725 12:27:15.785074   48017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:27:15.944650   48017 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 19:27:15.866501185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:27:15.966576   48017 out.go:177] * Using the docker driver based on existing profile
	I0725 12:27:15.987315   48017 start.go:284] selected driver: docker
	I0725 12:27:15.987335   48017 start.go:808] validating driver "docker" against &{Name:functional-20220725122408-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220725122408-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:27:15.987449   48017 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:27:16.010423   48017 out.go:177] 
	W0725 12:27:16.031456   48017 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0725 12:27:16.052210   48017 out.go:177] 
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --dry-run --alsologtostderr -v=1 --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --dry-run --alsologtostderr -v=1 --driver=docker : (1.127361609s)
--- PASS: TestFunctional/parallel/DryRun (2.05s)
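
The first DryRun leg deliberately under-requests memory; validation rejects 250MB with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any cluster mutation, which is exactly what the stderr block shows. A sketch of that assertion:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p",
		"functional-20220725122408-44543", "--dry-run", "--memory", "250MB",
		"--driver=docker")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("memory validation rejected 250MB as expected")
	} else {
		fmt.Println("expected exit status 23, got:", err)
	}
}
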

TestFunctional/parallel/InternationalLanguage (0.71s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220725122408-44543 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (713.88765ms)
-- stdout --
	* [functional-20220725122408-44543] minikube v1.26.0 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0725 12:27:14.506521   47992 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:27:14.506672   47992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:27:14.506677   47992 out.go:309] Setting ErrFile to fd 2...
	I0725 12:27:14.506681   47992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:27:14.506810   47992 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:27:14.507254   47992 out.go:303] Setting JSON to false
	I0725 12:27:14.523619   47992 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":12406,"bootTime":1658764828,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0725 12:27:14.523740   47992 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0725 12:27:14.545892   47992 out.go:177] * [functional-20220725122408-44543] minikube v1.26.0 sur Darwin 12.4
	I0725 12:27:14.609442   47992 out.go:177]   - MINIKUBE_LOCATION=14555
	I0725 12:27:14.652323   47992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	I0725 12:27:14.694413   47992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0725 12:27:14.715327   47992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 12:27:14.757641   47992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	I0725 12:27:14.779637   47992 config.go:178] Loaded profile config "functional-20220725122408-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 12:27:14.780048   47992 driver.go:365] Setting default libvirt URI to qemu:///system
	I0725 12:27:14.852044   47992 docker.go:137] docker version: linux-20.10.17
	I0725 12:27:14.852181   47992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 12:27:14.997087   47992 info.go:265] docker info: {ID:DEPZ:X5EQ:JUVI:7TAY:IXPP:M3QT:KGJK:NK3N:YSWM:HAHS:YLUF:RBE6 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-25 19:27:14.926306873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.1] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.7] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0725 12:27:15.039812   47992 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0725 12:27:15.060893   47992 start.go:284] selected driver: docker
	I0725 12:27:15.060919   47992 start.go:808] validating driver "docker" against &{Name:functional-20220725122408-44543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220725122408-44543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0725 12:27:15.061127   47992 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 12:27:15.086074   47992 out.go:177] 
	W0725 12:27:15.108082   47992 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0725 12:27:15.129592   47992 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.71s)
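A note on the block above: InternationalLanguage runs minikube under a French locale and deliberately asks for only 250MB of memory so that startup aborts with a localized error; in English the X lines read "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A minimal Go sketch of that kind of pre-flight guard (the constant and function names are illustrative, not minikube's):

package main

import "fmt"

// minMemoryMB mirrors the usable minimum quoted in the log (1800MB).
const minMemoryMB = 1800

// validateMemory rejects a request below the minimum, producing a
// RSRC_INSUFFICIENT_REQ_MEMORY-style reason code.
func validateMemory(requestedMB int) error {
	if requestedMB < minMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested allocation %dMB is less than the usable minimum of %dMB", requestedMB, minMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}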

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)
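For reference, the custom status format above is an ordinary Go template evaluated against minikube's status structure. A sketch of the same call from Go, assuming the built binary and profile from this run (the "kublet" text is the test's own label string, not a field name):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Only the {{.Field}} names must match minikube's status struct; the
	// text around them ("host:", "kublet:", ...) is echoed verbatim.
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20220725122408-44543", "status",
		"-f", "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}",
	).CombinedOutput()
	fmt.Println(string(out), err)
}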

TestFunctional/parallel/ServiceCmd (13.04s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220725122408-44543 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220725122408-44543 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-9nl97" [fc45db03-9a8a-4733-80bb-7323d6d80c01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-54c4b5c49f-9nl97" [fc45db03-9a8a-4733-80bb-7323d6d80c01] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.007578924s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 service --namespace=default --https --url hello-node: (2.025680783s)
functional_test.go:1475: found endpoint: https://127.0.0.1:64939
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 service hello-node --url --format={{.IP}}: (2.02717957s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 service hello-node --url: (2.030150889s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:65005
--- PASS: TestFunctional/parallel/ServiceCmd (13.04s)
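The ServiceCmd sequence above is a plain deploy/expose/lookup flow. A condensed Go sketch of it, assuming kubectl and the built minikube binary are reachable from the working directory (the ~2s per service call in the log is the tunnel the docker driver opens for each URL lookup):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one of the commands from the log and returns its output.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v: %v\n", name, args, err)
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-20220725122408-44543"
	run("kubectl", ctx, "create", "deployment", "hello-node", "--image=k8s.gcr.io/echoserver:1.8")
	run("kubectl", ctx, "expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// (the real test waits here for the app=hello-node pod to be Running)
	fmt.Print(run("out/minikube-darwin-amd64", "-p", "functional-20220725122408-44543",
		"service", "hello-node", "--url"))
}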

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 addons list
E0725 12:26:49.538182   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (25.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [b2696217-ef00-4274-849b-fa840a4fc6c0] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007828039s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220725122408-44543 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220725122408-44543 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220725122408-44543 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220725122408-44543 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [d9fa7817-96de-4fb5-af70-1bd6ae8aa897] Pending
E0725 12:26:44.415922   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:44.422486   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:44.433356   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:44.453439   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:44.493826   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:44.574604   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:44.734807   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:45.055236   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
helpers_test.go:342: "sp-pod" [d9fa7817-96de-4fb5-af70-1bd6ae8aa897] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0725 12:26:45.697526   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:26:46.977860   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [d9fa7817-96de-4fb5-af70-1bd6ae8aa897] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008388886s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220725122408-44543 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220725122408-44543 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220725122408-44543 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [03c15ec8-bcae-445f-88cb-64f819f60889] Pending
helpers_test.go:342: "sp-pod" [03c15ec8-bcae-445f-88cb-64f819f60889] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [03c15ec8-bcae-445f-88cb-64f819f60889] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00849043s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220725122408-44543 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.38s)
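The core assertion in the PVC test above is persistence across pod recreation: a file written through the claim must still exist after the pod is deleted and recreated. A bare-bones Go sketch of that sequence (error handling and the wait-for-Running polling the real test does are elided):

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	args = append([]string{"--context", "functional-20220725122408-44543"}, args...)
	out, _ := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v: %s", args, out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the PVC
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml") // kill the pod
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate it
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")              // foo must still be there
}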

TestFunctional/parallel/SSHCmd (1.1s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.10s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh -n functional-20220725122408-44543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 cp functional-20220725122408-44543:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2392516869/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh -n functional-20220725122408-44543 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)
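CpCmd checks a round trip: copy a file into the node, read it back over SSH, compare. The same check in a few lines of Go, using the paths from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-darwin-amd64", "functional-20220725122408-44543"
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	remote, _ := exec.Command(bin, "-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").Output()
	fmt.Println("round-trip ok:", bytes.Equal(local, remote))
}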

TestFunctional/parallel/MySQL (20.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220725122408-44543 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-67f7d69d8b-d7b9b" [f9a08f1f-9e8c-4f26-ad75-3a9f795b6aa3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:342: "mysql-67f7d69d8b-d7b9b" [f9a08f1f-9e8c-4f26-ad75-3a9f795b6aa3] Running
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.008027525s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220725122408-44543 exec mysql-67f7d69d8b-d7b9b -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220725122408-44543 exec mysql-67f7d69d8b-d7b9b -- mysql -ppassword -e "show databases;": exit status 1 (141.612989ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220725122408-44543 exec mysql-67f7d69d8b-d7b9b -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220725122408-44543 exec mysql-67f7d69d8b-d7b9b -- mysql -ppassword -e "show databases;": exit status 1 (102.924936ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220725122408-44543 exec mysql-67f7d69d8b-d7b9b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.06s)
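The two failed queries above are expected churn: ERROR 1045 while the MySQL entrypoint is still resetting credentials, then ERROR 2002 while the server socket is down during its restart, and finally success. A retry loop in the spirit of the test's behavior (pod name from this run; the interval is our choice):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryWithRetry re-runs the query until mysqld finishes starting up,
// riding out the transient 1045/2002 errors seen in the log.
func queryWithRetry(pod, sql string, attempts int) ([]byte, error) {
	var out []byte
	var err error
	for i := 0; i < attempts; i++ {
		out, err = exec.Command("kubectl", "--context", "functional-20220725122408-44543",
			"exec", pod, "--", "mysql", "-ppassword", "-e", sql).CombinedOutput()
		if err == nil {
			return out, nil
		}
		time.Sleep(2 * time.Second)
	}
	return out, err
}

func main() {
	out, err := queryWithRetry("mysql-67f7d69d8b-d7b9b", "show databases;", 10)
	fmt.Println(string(out), err)
}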

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/44543/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /etc/test/nested/copy/44543/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.63s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/44543.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /etc/ssl/certs/44543.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/44543.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /usr/share/ca-certificates/44543.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/445432.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /etc/ssl/certs/445432.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/445432.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /usr/share/ca-certificates/445432.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.63s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220725122408-44543 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)
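The --template argument above is an ordinary Go text/template: range over the node's label map and print each key. The same template run locally against a stand-in map (the two labels here are illustrative, not the node's full set):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-20220725122408-44543",
		"kubernetes.io/os":       "linux",
	}
	// kubectl evaluates this against (index .items 0).metadata.labels;
	// locally we hand the map in directly. Map keys range in sorted order.
	t := template.Must(template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}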

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo systemctl is-active crio"
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo systemctl is-active crio": exit status 1 (420.933358ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
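The "failure" above is the success condition: systemctl is-active exits 0 only for an active unit, so an inactive crio prints "inactive" and exits non-zero (systemctl returns 3; minikube ssh surfaces that as a non-zero exit), which is what proves the runtime is disabled. Inspecting that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-20220725122408-44543",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit plus "inactive" on stdout is exactly what the test wants.
		fmt.Printf("crio state %q, exit %d\n", string(out), ee.ExitCode())
	}
}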

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220725122408-44543 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220725122408-44543 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [e0408003-5b0e-43b5-8751-b2480b34912b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [e0408003-5b0e-43b5-8751-b2480b34912b] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.017856626s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220725122408-44543 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220725122408-44543 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 47738: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/ProfileCmd/profile_list (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0725 12:27:04.901948   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
functional_test.go:1310: Took "501.720368ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "75.253735ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.58s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "492.816095ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "119.046147ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

TestFunctional/parallel/MountCmd/any-port (9.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220725122408-44543 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2767922898/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1658777225971628000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2767922898/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1658777225971628000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2767922898/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1658777225971628000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2767922898/001/test-1658777225971628000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.693882ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 25 19:27 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 25 19:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 25 19:27 test-1658777225971628000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh cat /mount-9p/test-1658777225971628000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220725122408-44543 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [f9f1e489-c871-4b87-bf48-bcc0f068b174] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [f9f1e489-c871-4b87-bf48-bcc0f068b174] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [f9f1e489-c871-4b87-bf48-bcc0f068b174] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [f9f1e489-c871-4b87-bf48-bcc0f068b174] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008748702s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220725122408-44543 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh stat /mount-9p/created-by-test

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220725122408-44543 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2767922898/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.53s)
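The first findmnt probe above fails because the 9p mount daemon had not finished wiring up /mount-9p; the retry a moment later succeeds. A deadline-based poll in the same spirit:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls the guest until the 9p mount is visible.
func waitForMount(deadline time.Duration) error {
	bin, profile := "out/minikube-darwin-amd64", "functional-20220725122408-44543"
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(500 * time.Millisecond) {
		if exec.Command(bin, "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil {
			return nil
		}
	}
	return fmt.Errorf("/mount-9p never appeared within %v", deadline)
}

func main() {
	fmt.Println(waitForMount(30 * time.Second))
}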

TestFunctional/parallel/MountCmd/specific-port (2.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220725122408-44543 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port166934739/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (705.820059ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220725122408-44543 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port166934739/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh "sudo umount -f /mount-9p": exit status 1 (447.194114ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220725122408-44543 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port166934739/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.93s)

TestFunctional/parallel/DockerEnv/bash (1.79s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220725122408-44543 docker-env) && out/minikube-darwin-amd64 status -p functional-20220725122408-44543"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220725122408-44543 docker-env) && out/minikube-darwin-amd64 status -p functional-20220725122408-44543": (1.105319837s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220725122408-44543 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.79s)
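docker-env prints lines of the form export KEY="VALUE", and the bash test above eval's them before calling docker. The same wiring without a shell, folding those lines into a child process environment (the prefix parsing is our assumption about the output shape):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-darwin-amd64", "functional-20220725122408-44543"
	out, err := exec.Command(bin, "-p", profile, "docker-env").Output()
	if err != nil {
		panic(err)
	}
	env := os.Environ()
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "export ") {
			// export DOCKER_HOST="tcp://127.0.0.1:..." -> DOCKER_HOST=tcp://127.0.0.1:...
			env = append(env, strings.ReplaceAll(strings.TrimPrefix(line, "export "), `"`, ""))
		}
	}
	images := exec.Command("docker", "images")
	images.Env = env
	b, _ := images.CombinedOutput()
	fmt.Print(string(b))
}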

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 version -o=json --components
E0725 12:28:06.344530   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220725122408-44543
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/kube-apiserver                   | v1.24.2                         | d3377ffb7177c | 130MB  |
| k8s.gcr.io/pause                            | 3.7                             | 221177c6082a8 | 711kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220725122408-44543 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/etcd                             | 3.5.3-0                         | aebe758cef4cd | 299MB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-20220725122408-44543 | 6e5b5ab54d37a | 30B    |
| docker.io/library/nginx                     | alpine                          | e46bcc6975310 | 23.5MB |
| docker.io/library/nginx                     | latest                          | 670dcc86b69df | 142MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.2                         | 5d725196c1f47 | 51MB   |
| k8s.gcr.io/kube-controller-manager          | v1.24.2                         | 34cdf99b1bb3b | 119MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>                          | 115053965e86b | 43.8MB |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7                             | 459651132a111 | 429MB  |
| k8s.gcr.io/kube-proxy                       | v1.24.2                         | a634548d10b03 | 110MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| docker.io/localhost/my-image                | functional-20220725122408-44543 | a5ab4afba96f1 | 1.24MB |
| docker.io/kubernetesui/dashboard            | <none>                          | 1042d9e0d8fcc | 246MB  |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format json:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"6e5b5ab54d37a0e6f3f98619f2cfa831ab03686d669f173cfec3f3a55b0a3976","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220725122408-44543"],"size":"30"},{"id":"d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.2"],"size":"130000000"},{"id":"459651132a1115239f7370765464a0737d028ae7e74c68360740d81751fbae7e","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"429000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220725122408-44543"],"size":"32900000"},{"id":"a5ab4afba96f171e53f106a55d7ecf4aeadd2b91bbc184e6f3568eadaf338999","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220725122408-44543"],"size":"1240000"},{"id":"34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.2"],"size":"119000000"},{"id":"1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.2"],"size":"51000000"},{"id":"a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.2"],"size":"110000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
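The JSON list above decodes cleanly into a small struct; the field names come straight from the output, the struct itself is ours. A self-contained decode of one entry:

package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // bytes, reported as a string in this output
}

func main() {
	data := `[{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"}]`
	var imgs []image
	if err := json.Unmarshal([]byte(data), &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%s  %s  %s bytes\n", im.RepoTags[0], im.ID[:12], im.Size)
	}
}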

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls --format yaml:
- id: 670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.2
size: "130000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 459651132a1115239f7370765464a0737d028ae7e74c68360740d81751fbae7e
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "429000000"
- id: 5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.2
size: "51000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: 34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.2
size: "119000000"
- id: 1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6e5b5ab54d37a0e6f3f98619f2cfa831ab03686d669f173cfec3f3a55b0a3976
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220725122408-44543
size: "30"
- id: a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.2
size: "110000000"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
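
The listing above is a YAML sequence with one fixed-shape entry per image (id, repoDigests, repoTags, size). For readers scripting against this output, here is a minimal Go sketch that decodes that shape; the struct, the "functional" profile name, and the choice of gopkg.in/yaml.v3 are illustrative assumptions, not part of the test suite.

    package main

    import (
        "fmt"
        "os/exec"

        "gopkg.in/yaml.v3"
    )

    // listedImage mirrors the fields printed by `minikube image ls --format yaml`.
    type listedImage struct {
        ID          string   `yaml:"id"`
        RepoDigests []string `yaml:"repoDigests"`
        RepoTags    []string `yaml:"repoTags"`
        Size        string   `yaml:"size"`
    }

    func main() {
        // Assumes a minikube binary on PATH; the profile name is a placeholder.
        out, err := exec.Command("minikube", "-p", "functional",
            "image", "ls", "--format", "yaml").Output()
        if err != nil {
            panic(err)
        }
        var images []listedImage
        if err := yaml.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Printf("%v (id %s, size %s)\n", img.RepoTags, img.ID, img.Size)
        }
    }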

TestFunctional/parallel/ImageCommands/ImageBuild (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220725122408-44543 ssh pgrep buildkitd: exit status 1 (425.748789ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image build -t localhost/my-image:functional-20220725122408-44543 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image build -t localhost/my-image:functional-20220725122408-44543 testdata/build: (2.322187446s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image build -t localhost/my-image:functional-20220725122408-44543 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 46c42dee01c1
Removing intermediate container 46c42dee01c1
---> 09867bc66e38
Step 3/3 : ADD content.txt /
---> a5ab4afba96f
Successfully built a5ab4afba96f
Successfully tagged localhost/my-image:functional-20220725122408-44543
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.10s)
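
For context, the three build steps in the transcript above imply that testdata/build contains a Dockerfile of roughly this shape; this is a reconstruction inferred from the log, since the directory's actual contents are not shown here.

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /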

TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.843786985s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
E0725 12:27:25.383057   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543: (3.889209499s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543: (2.530494633s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.708769764s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543: (2.802591694s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image save gcr.io/google-containers/addon-resizer:functional-20220725122408-44543 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image save gcr.io/google-containers/addon-resizer:functional-20220725122408-44543 /Users/jenkins/workspace/addon-resizer-save.tar: (1.307372795s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image rm gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.361682747s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.70s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220725122408-44543 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220725122408-44543: (2.397049137s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.4s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 update-context --alsologtostderr -v=2
E0725 12:29:28.266722   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:31:44.422777   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:32:12.110715   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220725122408-44543 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220725122408-44543
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220725122408-44543
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220725122408-44543
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestJSONOutput/start/Command (43.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220725123941-44543 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220725123941-44543 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (43.734184296s)
--- PASS: TestJSONOutput/start/Command (43.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220725123941-44543 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220725123941-44543 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220725123941-44543 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220725123941-44543 --output=json --user=testUser: (12.368709804s)
--- PASS: TestJSONOutput/stop/Command (12.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220725124040-44543 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220725124040-44543 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (334.369335ms)

-- stdout --
	{"specversion":"1.0","id":"26c4a6a8-9b5c-432f-a1e2-0b79e4655c05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220725124040-44543] minikube v1.26.0 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7c45d3a-ed73-42ae-8932-35548c3ce628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"d004c073-0932-408c-b18a-693981b6c6d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig"}}
	{"specversion":"1.0","id":"1f22daf9-1c5d-4120-810a-cd298fe25739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a5366ebf-7a40-40e1-83a6-4543460e4cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea68b637-9afd-4e63-9006-c6a29cc3b2d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube"}}
	{"specversion":"1.0","id":"c4406e82-e135-44cb-8ded-5fcfd14be2ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220725124040-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220725124040-44543
--- PASS: TestErrorJSONOutput (0.77s)
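
Each line in the stdout block above is a CloudEvents-style JSON envelope, and the final io.k8s.sigs.minikube.error event carries the exit code and message inside its data field. A minimal Go sketch for consuming such a stream from stdin follows; the cloudEvent struct name and the fields it keeps are illustrative assumptions.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // cloudEvent mirrors the envelope fields visible in the events above;
    // every data value in this report is a string, so map[string]string suffices.
    type cloudEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // e.g. minikube start -p demo --output=json | go run decode.go
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev cloudEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON lines
            }
            fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
        }
    }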

TestKicCustomNetwork/create_custom_network (30.2s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220725124041-44543 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220725124041-44543 --network=: (27.414531993s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220725124041-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220725124041-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220725124041-44543: (2.715979457s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.20s)

TestKicCustomNetwork/use_default_bridge_network (29.93s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220725124111-44543 --network=bridge
E0725 12:41:38.163962   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220725124111-44543 --network=bridge: (27.318997386s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220725124111-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220725124111-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220725124111-44543: (2.547287852s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.93s)

TestKicExistingNetwork (29.66s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220725124142-44543 --network=existing-network
E0725 12:41:44.434719   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
E0725 12:42:05.861510   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220725124142-44543 --network=existing-network: (26.701796717s)
helpers_test.go:175: Cleaning up "existing-network-20220725124142-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220725124142-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220725124142-44543: (2.550203002s)
--- PASS: TestKicExistingNetwork (29.66s)

TestKicCustomSubnet (29.36s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220725124211-44543 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220725124211-44543 --subnet=192.168.60.0/24: (26.565192118s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220725124211-44543 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220725124211-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220725124211-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220725124211-44543: (2.727472598s)
--- PASS: TestKicCustomSubnet (29.36s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (62.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220725124240-44543 --driver=docker 
E0725 12:43:07.485971   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220725124240-44543 --driver=docker : (27.176138108s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220725124240-44543 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220725124240-44543 --driver=docker : (27.98732307s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220725124240-44543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220725124240-44543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220725124240-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220725124240-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220725124240-44543: (2.734051374s)
helpers_test.go:175: Cleaning up "first-20220725124240-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220725124240-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220725124240-44543: (2.756383135s)
--- PASS: TestMinikubeProfile (62.75s)

TestMountStart/serial/StartWithMountFirst (7.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220725124343-44543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220725124343-44543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.688125056s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)

TestMountStart/serial/VerifyMountFirst (0.44s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220725124343-44543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)

TestMountStart/serial/StartWithMountSecond (7.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220725124343-44543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220725124343-44543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.590323304s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.59s)

TestMountStart/serial/VerifyMountSecond (0.45s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220725124343-44543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

TestMountStart/serial/DeleteFirst (2.32s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220725124343-44543 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220725124343-44543 --alsologtostderr -v=5: (2.319859002s)
--- PASS: TestMountStart/serial/DeleteFirst (2.32s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220725124343-44543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.63s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220725124343-44543
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220725124343-44543: (1.627409389s)
--- PASS: TestMountStart/serial/Stop (1.63s)

TestMountStart/serial/RestartStopped (5.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220725124343-44543
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220725124343-44543: (4.404379968s)
--- PASS: TestMountStart/serial/RestartStopped (5.40s)

TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220725124343-44543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

TestMultiNode/serial/FreshStart2Nodes (99.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725124412-44543 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725124412-44543 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m38.330905401s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.10s)

TestMultiNode/serial/DeployApp2Nodes (5.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.713477362s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- rollout status deployment/busybox: (2.567443424s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-prc78 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-rcxbt -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-prc78 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-rcxbt -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-prc78 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-rcxbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.67s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-prc78 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-prc78 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-rcxbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220725124412-44543 -- exec busybox-d46db594c-rcxbt -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

TestMultiNode/serial/AddNode (34.17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220725124412-44543 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220725124412-44543 -v 3 --alsologtostderr: (33.031752043s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr: (1.142584109s)
--- PASS: TestMultiNode/serial/AddNode (34.17s)

TestMultiNode/serial/ProfileList (0.57s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (16.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --output json --alsologtostderr: (1.122400236s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp testdata/cp-test.txt multinode-20220725124412-44543:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1403694039/001/cp-test_multinode-20220725124412-44543.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543:/home/docker/cp-test.txt multinode-20220725124412-44543-m02:/home/docker/cp-test_multinode-20220725124412-44543_multinode-20220725124412-44543-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m02 "sudo cat /home/docker/cp-test_multinode-20220725124412-44543_multinode-20220725124412-44543-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543:/home/docker/cp-test.txt multinode-20220725124412-44543-m03:/home/docker/cp-test_multinode-20220725124412-44543_multinode-20220725124412-44543-m03.txt
E0725 12:46:38.170070   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m03 "sudo cat /home/docker/cp-test_multinode-20220725124412-44543_multinode-20220725124412-44543-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp testdata/cp-test.txt multinode-20220725124412-44543-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1403694039/001/cp-test_multinode-20220725124412-44543-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543-m02:/home/docker/cp-test.txt multinode-20220725124412-44543:/home/docker/cp-test_multinode-20220725124412-44543-m02_multinode-20220725124412-44543.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543 "sudo cat /home/docker/cp-test_multinode-20220725124412-44543-m02_multinode-20220725124412-44543.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543-m02:/home/docker/cp-test.txt multinode-20220725124412-44543-m03:/home/docker/cp-test_multinode-20220725124412-44543-m02_multinode-20220725124412-44543-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m03 "sudo cat /home/docker/cp-test_multinode-20220725124412-44543-m02_multinode-20220725124412-44543-m03.txt"
E0725 12:46:44.442513   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp testdata/cp-test.txt multinode-20220725124412-44543-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1403694039/001/cp-test_multinode-20220725124412-44543-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543-m03:/home/docker/cp-test.txt multinode-20220725124412-44543:/home/docker/cp-test_multinode-20220725124412-44543-m03_multinode-20220725124412-44543.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543 "sudo cat /home/docker/cp-test_multinode-20220725124412-44543-m03_multinode-20220725124412-44543.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 cp multinode-20220725124412-44543-m03:/home/docker/cp-test.txt multinode-20220725124412-44543-m02:/home/docker/cp-test_multinode-20220725124412-44543-m03_multinode-20220725124412-44543-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 ssh -n multinode-20220725124412-44543-m02 "sudo cat /home/docker/cp-test_multinode-20220725124412-44543-m03_multinode-20220725124412-44543-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.90s)

TestMultiNode/serial/StopNode (14.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 node stop m03: (12.455086663s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status: exit status 7 (838.708236ms)

-- stdout --
	multinode-20220725124412-44543
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220725124412-44543-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220725124412-44543-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr: exit status 7 (847.59612ms)

-- stdout --
	multinode-20220725124412-44543
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220725124412-44543-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220725124412-44543-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0725 12:47:03.338102   52039 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:47:03.338297   52039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:47:03.338302   52039 out.go:309] Setting ErrFile to fd 2...
	I0725 12:47:03.338305   52039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:47:03.338409   52039 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:47:03.338581   52039 out.go:303] Setting JSON to false
	I0725 12:47:03.338595   52039 mustload.go:65] Loading cluster: multinode-20220725124412-44543
	I0725 12:47:03.338880   52039 config.go:178] Loaded profile config "multinode-20220725124412-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 12:47:03.338892   52039 status.go:253] checking status of multinode-20220725124412-44543 ...
	I0725 12:47:03.339281   52039 cli_runner.go:164] Run: docker container inspect multinode-20220725124412-44543 --format={{.State.Status}}
	I0725 12:47:03.411144   52039 status.go:328] multinode-20220725124412-44543 host status = "Running" (err=<nil>)
	I0725 12:47:03.411193   52039 host.go:66] Checking if "multinode-20220725124412-44543" exists ...
	I0725 12:47:03.411465   52039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220725124412-44543
	I0725 12:47:03.483540   52039 host.go:66] Checking if "multinode-20220725124412-44543" exists ...
	I0725 12:47:03.483817   52039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 12:47:03.483869   52039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220725124412-44543
	I0725 12:47:03.555155   52039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/multinode-20220725124412-44543/id_rsa Username:docker}
	I0725 12:47:03.641704   52039 ssh_runner.go:195] Run: systemctl --version
	I0725 12:47:03.646127   52039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:47:03.655545   52039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220725124412-44543
	I0725 12:47:03.727361   52039 kubeconfig.go:92] found "multinode-20220725124412-44543" server: "https://127.0.0.1:50973"
	I0725 12:47:03.727386   52039 api_server.go:165] Checking apiserver status ...
	I0725 12:47:03.727422   52039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 12:47:03.737160   52039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1668/cgroup
	W0725 12:47:03.744829   52039 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1668/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 12:47:03.744856   52039 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50973/healthz ...
	I0725 12:47:03.751008   52039 api_server.go:266] https://127.0.0.1:50973/healthz returned 200:
	ok
	I0725 12:47:03.751022   52039 status.go:419] multinode-20220725124412-44543 apiserver status = Running (err=<nil>)
	I0725 12:47:03.751030   52039 status.go:255] multinode-20220725124412-44543 status: &{Name:multinode-20220725124412-44543 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 12:47:03.751044   52039 status.go:253] checking status of multinode-20220725124412-44543-m02 ...
	I0725 12:47:03.751271   52039 cli_runner.go:164] Run: docker container inspect multinode-20220725124412-44543-m02 --format={{.State.Status}}
	I0725 12:47:03.822935   52039 status.go:328] multinode-20220725124412-44543-m02 host status = "Running" (err=<nil>)
	I0725 12:47:03.822958   52039 host.go:66] Checking if "multinode-20220725124412-44543-m02" exists ...
	I0725 12:47:03.823211   52039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220725124412-44543-m02
	I0725 12:47:03.896276   52039 host.go:66] Checking if "multinode-20220725124412-44543-m02" exists ...
	I0725 12:47:03.896519   52039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 12:47:03.896561   52039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220725124412-44543-m02
	I0725 12:47:03.966883   52039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51099 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/multinode-20220725124412-44543-m02/id_rsa Username:docker}
	I0725 12:47:04.053917   52039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 12:47:04.063014   52039 status.go:255] multinode-20220725124412-44543-m02 status: &{Name:multinode-20220725124412-44543-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0725 12:47:04.063037   52039 status.go:253] checking status of multinode-20220725124412-44543-m03 ...
	I0725 12:47:04.063323   52039 cli_runner.go:164] Run: docker container inspect multinode-20220725124412-44543-m03 --format={{.State.Status}}
	I0725 12:47:04.134271   52039 status.go:328] multinode-20220725124412-44543-m03 host status = "Stopped" (err=<nil>)
	I0725 12:47:04.134292   52039 status.go:341] host is not running, skipping remaining checks
	I0725 12:47:04.134302   52039 status.go:255] multinode-20220725124412-44543-m03 status: &{Name:multinode-20220725124412-44543-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.14s)

TestMultiNode/serial/StartAfterStop (19.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 node start m03 --alsologtostderr: (18.641935706s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status: (1.114918498s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.87s)

TestMultiNode/serial/RestartKeepsNodes (131.66s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220725124412-44543
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220725124412-44543
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220725124412-44543: (36.959122719s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725124412-44543 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725124412-44543 --wait=true -v=8 --alsologtostderr: (1m34.59293388s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220725124412-44543
--- PASS: TestMultiNode/serial/RestartKeepsNodes (131.66s)

TestMultiNode/serial/DeleteNode (18.66s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 node delete m03: (16.338328776s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.447735938s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.66s)
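The go-template passed to kubectl at multinode_test.go:430 walks every node's conditions and prints the status of the Ready condition. As a minimal sketch of what that template evaluates (the sample JSON and package scaffolding below are illustrative, not taken from the test), the identical template string can be run through Go's text/template against a decoded NodeList:

package main

// Replays the exact go-template from the test over a sample NodeList,
// the way `kubectl get nodes -o go-template=...` evaluates it.
import (
	"encoding/json"
	"os"
	"text/template"
)

const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Stand-in for the JSON kubectl would fetch from the API server.
	sample := `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`
	var nodeList map[string]interface{}
	if err := json.Unmarshal([]byte(sample), &nodeList); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodeList); err != nil {
		panic(err)
	}
	// Prints " True" once per node; after deleting m03, every
	// remaining node should report True.
}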

TestMultiNode/serial/StopMultiNode (25.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 stop: (24.693449877s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status: exit status 7 (180.808621ms)
-- stdout --
	multinode-20220725124412-44543
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220725124412-44543-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr: exit status 7 (178.556603ms)
-- stdout --
	multinode-20220725124412-44543
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220725124412-44543-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0725 12:50:19.233350   52677 out.go:296] Setting OutFile to fd 1 ...
	I0725 12:50:19.233540   52677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:50:19.233545   52677 out.go:309] Setting ErrFile to fd 2...
	I0725 12:50:19.233549   52677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0725 12:50:19.233655   52677 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
	I0725 12:50:19.233817   52677 out.go:303] Setting JSON to false
	I0725 12:50:19.233832   52677 mustload.go:65] Loading cluster: multinode-20220725124412-44543
	I0725 12:50:19.234124   52677 config.go:178] Loaded profile config "multinode-20220725124412-44543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0725 12:50:19.234133   52677 status.go:253] checking status of multinode-20220725124412-44543 ...
	I0725 12:50:19.234484   52677 cli_runner.go:164] Run: docker container inspect multinode-20220725124412-44543 --format={{.State.Status}}
	I0725 12:50:19.297958   52677 status.go:328] multinode-20220725124412-44543 host status = "Stopped" (err=<nil>)
	I0725 12:50:19.297979   52677 status.go:341] host is not running, skipping remaining checks
	I0725 12:50:19.297985   52677 status.go:255] multinode-20220725124412-44543 status: &{Name:multinode-20220725124412-44543 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 12:50:19.298009   52677 status.go:253] checking status of multinode-20220725124412-44543-m02 ...
	I0725 12:50:19.298423   52677 cli_runner.go:164] Run: docker container inspect multinode-20220725124412-44543-m02 --format={{.State.Status}}
	I0725 12:50:19.362027   52677 status.go:328] multinode-20220725124412-44543-m02 host status = "Stopped" (err=<nil>)
	I0725 12:50:19.362051   52677 status.go:341] host is not running, skipping remaining checks
	I0725 12:50:19.362060   52677 status.go:255] multinode-20220725124412-44543-m02 status: &{Name:multinode-20220725124412-44543-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.05s)
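The "exit status 7" above, with host, kubelet, and apiserver all stopped, is consistent with minikube's status command composing its exit code from per-component bit flags. A minimal sketch of that arithmetic (the flag names and values are an inference from the observed codes, not quoted from minikube's source):

package main

import "fmt"

// Assumed per-component flags: host down = 1, kubelet down = 2,
// apiserver down = 4; a fully stopped node ORs them into 7.
const (
	hostNotRunning      = 1 << 0
	kubeletNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

func main() {
	code := hostNotRunning | kubeletNotRunning | apiserverNotRunning
	fmt.Println(code) // 7, matching the Non-zero exit seen above
}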

TestMultiNode/serial/RestartMultiNode (58.92s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725124412-44543 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725124412-44543 --wait=true -v=8 --alsologtostderr --driver=docker : (56.397536248s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220725124412-44543 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.623162103s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.92s)

TestMultiNode/serial/ValidateNameConflict (31.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220725124412-44543
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725124412-44543-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220725124412-44543-m02 --driver=docker : exit status 14 (398.214989ms)
-- stdout --
	* [multinode-20220725124412-44543-m02] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220725124412-44543-m02' is duplicated with machine name 'multinode-20220725124412-44543-m02' in profile 'multinode-20220725124412-44543'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220725124412-44543-m03 --driver=docker 
E0725 12:51:38.152266   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:51:44.424035   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220725124412-44543-m03 --driver=docker : (28.094327189s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220725124412-44543
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220725124412-44543: exit status 80 (530.780891ms)
-- stdout --
	* Adding node m03 to cluster multinode-20220725124412-44543
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220725124412-44543-m03 already exists in multinode-20220725124412-44543-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220725124412-44543-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220725124412-44543-m03: (2.789601892s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.87s)

TestScheduledStopUnix (101.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220725125621-44543 --memory=2048 --driver=docker 
E0725 12:56:38.160144   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 12:56:44.430339   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220725125621-44543 --memory=2048 --driver=docker : (27.294764226s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725125621-44543 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220725125621-44543 -n scheduled-stop-20220725125621-44543
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725125621-44543 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725125621-44543 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220725125621-44543 -n scheduled-stop-20220725125621-44543
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220725125621-44543
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220725125621-44543 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220725125621-44543
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220725125621-44543: exit status 7 (121.672674ms)
-- stdout --
	scheduled-stop-20220725125621-44543
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220725125621-44543 -n scheduled-stop-20220725125621-44543
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220725125621-44543 -n scheduled-stop-20220725125621-44543: exit status 7 (118.287876ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220725125621-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220725125621-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220725125621-44543: (2.41069272s)
--- PASS: TestScheduledStopUnix (101.76s)
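The sequence above arms a scheduled stop, cancels it, re-arms it, and then observes the profile transition to Stopped. A hypothetical standalone driver for the same flow, using only the flags that appear in this log (--schedule, --cancel-scheduled, and status --format):

package main

// Hypothetical helper, not part of the test suite: replays the
// scheduled-stop flow exercised by TestScheduledStopUnix.
import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		// status exits nonzero (7 above) once the profile is stopped
		fmt.Printf("%v exited: %v\n", args, err)
	}
	return string(out)
}

func main() {
	const p = "scheduled-stop-20220725125621-44543"
	run("stop", "-p", p, "--schedule", "15s")  // arm a stop 15 seconds out
	run("stop", "-p", p, "--cancel-scheduled") // disarm it before it fires
	fmt.Println(run("status", "--format={{.Host}}", "-p", p))
}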

TestSkaffold (65.81s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3775618635 version
skaffold_test.go:63: skaffold version: v1.39.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220725125803-44543 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220725125803-44543 --memory=2600 --driver=docker : (26.668873147s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3775618635 run --minikube-profile skaffold-20220725125803-44543 --kube-context skaffold-20220725125803-44543 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3775618635 run --minikube-profile skaffold-20220725125803-44543 --kube-context skaffold-20220725125803-44543 --status-check=true --port-forward=false --interactive=false: (24.403391599s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-54f44f4fdb-ms9cf" [326e2488-65f6-4cdc-bab7-d33699355b91] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013066995s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-c5bd8847-lvl5h" [68245c0a-a137-4c0c-a793-f60836ddcfa4] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009783867s
helpers_test.go:175: Cleaning up "skaffold-20220725125803-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220725125803-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220725125803-44543: (3.038992568s)
--- PASS: TestSkaffold (65.81s)

TestInsufficientStorage (13.08s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220725125909-44543 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220725125909-44543 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.719474921s)
-- stdout --
	{"specversion":"1.0","id":"f165efdd-2612-43e8-9fd9-334c77d9f843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220725125909-44543] minikube v1.26.0 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7de4ca7c-207a-45d3-ba51-5366589f853b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"72f2002f-1d50-4898-99aa-685410fc5c01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig"}}
	{"specversion":"1.0","id":"2f9715b4-f2c9-4733-942a-cff7d2298a82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"64bc9d67-b5d8-497e-b91f-14b280d460ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b9bad0d5-0a85-4818-9994-a7155ef403cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube"}}
	{"specversion":"1.0","id":"5bc8969f-49e2-44f9-8df6-4a9bce02e74a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cbf5891b-b223-4b9e-8d18-d32aebd66fa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"886bea8b-9ad2-40d8-b70c-cf97723b1cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"57be22c1-ac3e-43d8-8223-1814472da455","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"20520eb8-720b-4594-a6d5-f56d7633314f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220725125909-44543 in cluster insufficient-storage-20220725125909-44543","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"89887e27-3b43-4656-9f17-22e941e1674d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"63a13b68-6399-41ec-bdaa-dfec1f393d6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5eea077d-ffc2-437d-b8d8-c4be52520aae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220725125909-44543 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220725125909-44543 --output=json --layout=cluster: exit status 7 (428.74584ms)
-- stdout --
	{"Name":"insufficient-storage-20220725125909-44543","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220725125909-44543","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0725 12:59:19.210259   54233 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220725125909-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220725125909-44543 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220725125909-44543 --output=json --layout=cluster: exit status 7 (427.212592ms)
-- stdout --
	{"Name":"insufficient-storage-20220725125909-44543","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220725125909-44543","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0725 12:59:19.638363   54243 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220725125909-44543" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	E0725 12:59:19.646555   54243 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/insufficient-storage-20220725125909-44543/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220725125909-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220725125909-44543
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220725125909-44543: (2.504843149s)
--- PASS: TestInsufficientStorage (13.08s)
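Each stdout line that start --output=json emits above is a CloudEvents-style JSON object whose "type" distinguishes steps, info, and errors. A minimal sketch of consuming that stream (the struct mirrors only the fields visible in the log; anything beyond them is an assumption):

package main

// Scans minikube's --output=json event stream on stdin and surfaces
// any io.k8s.sigs.minikube.error event, like RSRC_DOCKER_STORAGE above.
import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exitcode=%s name=%s: %s\n",
				ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}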

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.61s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current637113021/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current637113021/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current637113021/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current637113021/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.61s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2909889848/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2909889848/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2909889848/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2909889848/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.96s)

TestStoppedBinaryUpgrade/Setup (1.38s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.38s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.56s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220725130445-44543
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220725130445-44543: (3.559630681s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.56s)

TestPause/serial/Start (44.37s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220725130540-44543 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220725130540-44543 --memory=2048 --install-addons=false --wait=all --driver=docker : (44.368320451s)
--- PASS: TestPause/serial/Start (44.37s)

TestPause/serial/SecondStartNoReconfiguration (68.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220725130540-44543 --alsologtostderr -v=1 --driver=docker 
E0725 13:06:38.144532   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 13:06:39.841825   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:06:44.417635   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220725130540-44543 --alsologtostderr -v=1 --driver=docker : (1m8.850786471s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (68.86s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220725130540-44543 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (365.277336ms)
-- stdout --
	* [NoKubernetes-20220725130838-44543] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

TestNoKubernetes/serial/StartWithK8s (27.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --driver=docker 
E0725 13:08:55.991754   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --driver=docker : (27.139421981s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220725130838-44543 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (27.60s)

TestNoKubernetes/serial/StartWithStopK8s (17.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --no-kubernetes --driver=docker : (14.305327959s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220725130838-44543 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220725130838-44543 status -o json: exit status 2 (447.559501ms)
-- stdout --
	{"Name":"NoKubernetes-20220725130838-44543","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220725130838-44543
E0725 13:09:23.685298   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220725130838-44543: (2.513458703s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.27s)
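The status -o json output above is a single flat object per profile. A minimal sketch of decoding it (field names copied from the JSON shown; the interpretation comment is tied to this run):

package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the fields printed above by `minikube status -o json`.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-20220725130838-44543","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A --no-kubernetes profile: host up with kubelet and apiserver
	// stopped, which is why status exits 2 rather than 0 above.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer)
}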

TestNoKubernetes/serial/Start (6.81s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --no-kubernetes --driver=docker : (6.810293384s)
--- PASS: TestNoKubernetes/serial/Start (6.81s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725130838-44543 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725130838-44543 "sudo systemctl is-active --quiet service kubelet": exit status 1 (463.332292ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

TestNoKubernetes/serial/ProfileList (1.58s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.58s)

TestNoKubernetes/serial/Stop (1.64s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220725130838-44543
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220725130838-44543: (1.643799229s)
--- PASS: TestNoKubernetes/serial/Stop (1.64s)

TestNoKubernetes/serial/StartNoArgs (4.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220725130838-44543 --driver=docker : (4.32406821s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725130838-44543 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220725130838-44543 "sudo systemctl is-active --quiet service kubelet": exit status 1 (426.692841ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestNetworkPlugins/group/auto/Start (45.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (45.622657604s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220725125922-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

TestNetworkPlugins/group/auto/NetCatPod (12.64s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml: (1.601212779s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-k759k" [69b0dbe0-09a2-421b-ab51-a4329923df55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-k759k" [69b0dbe0-09a2-421b-ab51-a4329923df55] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009930544s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.64s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220725125922-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.11606924s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.12s)
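Note that the nc probe exits 1 here and the test still passes: for the default (auto) network the harness evidently expects hairpin traffic, a pod dialing its own service name, to fail, whereas the same probe succeeds under kindnet later in this run. A hypothetical standalone version of the probe (the service name and port come from the command above):

package main

// Plain TCP dial with a timeout, equivalent to `nc -w 5 -z netcat 8080`
// run from inside the pod: succeeds only if hairpin connectivity works.
import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "hairpin probe failed:", err)
		os.Exit(1) // matches the exit status 1 observed above
	}
	conn.Close()
	fmt.Println("hairpin probe ok")
}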

TestNetworkPlugins/group/kindnet/Start (48.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (48.780330239s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.78s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-z8r7f" [6d17547b-633f-403d-bc57-76fc652b07d6] Running
E0725 13:11:38.150973   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014776232s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220725125922-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.69s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml: (1.66243134s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-56w8d" [a984ae60-84d9-421c-941f-798ab9020746] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 13:11:44.421638   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-56w8d" [a984ae60-84d9-421c-941f-798ab9020746] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009561105s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.69s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220725125922-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)
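
Note: the DNS case checks in-cluster service discovery by resolving the API server's well-known Service name from inside the probe pod, exercising CoreDNS over the CNI under test:

    # Resolve the kubernetes.default Service from inside the cluster
    kubectl --context kindnet-demo exec deployment/netcat -- nslookup kubernetes.default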

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)
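
Note: Localhost is the baseline probe; the pod dials its own port over loopback, which should succeed under any CNI. The nc flags (5s timeout, 5s interval, zero-I/O scan) are taken verbatim from the log:

    # Zero-I/O dial of the pod's own port via loopback
    kubectl --context kindnet-demo exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"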

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)
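
Note: in the HairPin case "netcat" is the Service name rather than localhost, so the pod's traffic goes out via the Service VIP and must be NATed back to the same pod (hairpin NAT). Whether that is expected to succeed depends on the network configuration; with kindnet it does:

    # Dial the pod's own Service name, exercising hairpin NAT on the node
    kubectl --context kindnet-demo exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"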

TestNetworkPlugins/group/cilium/Start (75.36s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220725125923-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220725125923-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m15.356721066s)
--- PASS: TestNetworkPlugins/group/cilium/Start (75.36s)

TestNetworkPlugins/group/calico/Start (71.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220725125923-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220725125923-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m11.946201833s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.95s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-c2t4r" [a5bedd35-b118-4557-ab2c-748f654ad3bc] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016372578s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220725125923-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.47s)

TestNetworkPlugins/group/cilium/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220725125923-44543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220725125923-44543 replace --force -f testdata/netcat-deployment.yaml: (2.263110816s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xpc5g" [bea189d1-f8e2-48a5-9aeb-adef2df35c3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-xpc5g" [bea189d1-f8e2-48a5-9aeb-adef2df35c3f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.009716586s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (11.32s)

TestNetworkPlugins/group/cilium/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220725125923-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220725125923-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220725125923-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (45.61s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0725 13:13:55.997458   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (45.606819729s)
--- PASS: TestNetworkPlugins/group/false/Start (45.61s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-6jwzq" [391a725b-f828-4b90-986c-51148eceb84a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015613152s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220725125923-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

TestNetworkPlugins/group/calico/NetCatPod (11.75s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220725125923-44543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220725125923-44543 replace --force -f testdata/netcat-deployment.yaml: (1.713989102s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-4d84x" [4c987615-190f-40a6-a805-f7da3832d95d] Pending
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-4d84x" [4c987615-190f-40a6-a805-f7da3832d95d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-4d84x" [4c987615-190f-40a6-a805-f7da3832d95d] Running
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008850629s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.75s)

TestNetworkPlugins/group/false/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220725125922-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.47s)

TestNetworkPlugins/group/false/NetCatPod (11.79s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml: (1.74538284s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-xr95t" [b3586eb1-afc7-4f56-83e1-e7cc4436975d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-xr95t" [b3586eb1-afc7-4f56-83e1-e7cc4436975d] Running
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00855507s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.79s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220725125923-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220725125923-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220725125923-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220725125922-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (5.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114201938s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)
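
Note: this case passes even though nc exited non-zero. With --cni=false there is no CNI to provide a hairpin path, so the suite evidently treats the refused Service-name dial as the expected outcome for this configuration. Reproducing by hand, with false-demo as a hypothetical profile started with --cni=false:

    # Expect the hairpin dial to fail when no CNI is configured
    kubectl --context false-demo exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" || echo "hairpin refused, as expected"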

TestNetworkPlugins/group/bridge/Start (55.46s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (55.460915284s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.46s)

TestNetworkPlugins/group/enable-default-cni/Start (46.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (46.169706356s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220725125922-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.53s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (41.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml: (1.714099404s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-whql8" [c5c4b46d-2d40-4a8c-9013-41e29f577109] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-whql8" [c5c4b46d-2d40-4a8c-9013-41e29f577109] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 40.00820573s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (41.75s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.97s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220725125922-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.97s)

TestNetworkPlugins/group/bridge/NetCatPod (11.92s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml
E0725 13:15:28.964339   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:28.969485   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:28.980477   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:29.000639   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:29.040856   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:29.120953   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:29.281562   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
net_test.go:138: (dbg) Done: kubectl --context bridge-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml: (1.888065037s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-t9pw2" [3308c153-6736-411d-93d4-47b738ec65f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 13:15:29.601764   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:30.241912   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:31.552728   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:15:34.113929   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-t9pw2" [3308c153-6736-411d-93d4-47b738ec65f5] Running
E0725 13:15:39.234630   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.009451329s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.92s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220725125922-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (46.05s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E0725 13:15:49.475833   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220725125922-44543 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (46.046784384s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (46.05s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220725125922-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220725125922-44543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.79s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220725125922-44543 replace --force -f testdata/netcat-deployment.yaml: (1.755325507s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-rng6t" [a85117a0-7840-43e1-a5c5-a66902bb8a57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-rng6t" [a85117a0-7840-43e1-a5c5-a66902bb8a57] Running
E0725 13:16:36.918228   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:36.924762   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:36.935030   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:36.955131   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:36.995281   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:37.077584   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:37.238147   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:37.558332   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:38.157497   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 13:16:38.198947   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:16:39.479328   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.009204794s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.79s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220725125922-44543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (92.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220725131741-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2
E0725 13:17:58.845245   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:18:12.625967   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:12.631546   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:12.641761   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:12.661872   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:12.702248   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:12.783469   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:12.841856   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
E0725 13:18:12.943775   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:13.263918   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:13.904103   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:15.185476   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:17.745643   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:22.867985   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:33.108371   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:53.593516   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:18:56.011287   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:19:10.414002   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:10.419156   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:10.429484   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:10.449623   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:10.490638   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:10.571730   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:10.734098   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:11.054426   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:11.694899   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:12.975593   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220725131741-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2: (1m32.239087372s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.24s)
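
Note: FirstStart runs with --preload=false, so minikube skips the preloaded image tarball and pulls each Kubernetes image individually; that is the main reason this start takes about 1.5 minutes versus the 45-75s CNI starts above. A local reproduction, with no-preload-demo as a hypothetical profile name:

    # Start without the preload tarball; images are fetched one by one
    out/minikube-darwin-amd64 start -p no-preload-demo \
      --memory=2200 --wait=true --preload=false \
      --driver=docker --kubernetes-version=v1.24.2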

TestStartStop/group/no-preload/serial/DeployApp (9.77s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 create -f testdata/busybox.yaml
E0725 13:19:15.541873   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) Done: kubectl --context no-preload-20220725131741-44543 create -f testdata/busybox.yaml: (1.637704106s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2ceaa16f-3c06-45e4-b516-043bc6a287e3] Pending
helpers_test.go:342: "busybox" [2ceaa16f-3c06-45e4-b516-043bc6a287e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2ceaa16f-3c06-45e4-b516-043bc6a287e3] Running
E0725 13:19:20.622151   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.627396   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.639095   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.659331   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.663597   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:20.700363   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.781776   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.786889   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:20.943315   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:21.263552   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:21.904074   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:23.185873   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.018117983s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.77s)
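
Note: DeployApp creates a busybox pod from the suite's testdata, waits for it to run (8m budget per the log), and then confirms exec works by reading the container's open-file limit. By hand, under the same assumed profile:

    # Deploy busybox, wait for Ready, then exec a trivial command
    kubectl --context no-preload-demo create -f testdata/busybox.yaml
    kubectl --context no-preload-demo wait pod busybox --for=condition=Ready --timeout=480s
    kubectl --context no-preload-demo exec busybox -- /bin/sh -c "ulimit -n"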

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220725131741-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/no-preload/serial/Stop (12.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220725131741-44543 --alsologtostderr -v=3
E0725 13:19:25.747756   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:30.868851   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:30.905864   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:19:34.571879   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220725131741-44543 --alsologtostderr -v=3: (12.540716725s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.54s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543: exit status 7 (117.328336ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220725131741-44543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)
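
Note: exit status 7 from minikube status is the "host stopped" code, which the test explicitly tolerates ("may be ok"). Enabling an addon while stopped works because the change is recorded in the profile's configuration and applied on the next start:

    # Status exits 7 while the cluster is down; addon enablement still sticks
    out/minikube-darwin-amd64 status --format='{{.Host}}' -p no-preload-demo || true
    out/minikube-darwin-amd64 addons enable dashboard -p no-preload-demo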

TestStartStop/group/no-preload/serial/SecondStart (300.05s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220725131741-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2
E0725 13:19:41.110124   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:19:51.387984   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:20:01.591787   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:20:19.081797   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220725131741-44543 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2: (4m59.505287728s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220725131741-44543 -n no-preload-20220725131741-44543
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.05s)
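
Note: SecondStart re-issues the identical start command against the stopped profile and then asserts the host reports Running; without a preload tarball the restart again fetches and verifies images individually, hence the roughly five-minute wall time:

    # Restart the stopped cluster with the same flags, then confirm it is up
    out/minikube-darwin-amd64 start -p no-preload-demo --memory=2200 --wait=true \
      --preload=false --driver=docker --kubernetes-version=v1.24.2
    out/minikube-darwin-amd64 status --format='{{.Host}}' -p no-preload-demo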

TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220725131610-44543 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220725131610-44543 --alsologtostderr -v=3: (1.624699512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220725131610-44543 -n old-k8s-version-20220725131610-44543: exit status 7 (117.801101ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220725131610-44543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-mhjvb" [b3d92a3d-a9e6-4310-865a-8f9cb6d82035] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0725 13:24:38.118895   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-mhjvb" [b3d92a3d-a9e6-4310-865a-8f9cb6d82035] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.013318465s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.58s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-mhjvb" [b3d92a3d-a9e6-4310-865a-8f9cb6d82035] Running
E0725 13:24:48.321166   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009975415s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220725131741-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context no-preload-20220725131741-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.567919391s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.58s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220725131741-44543 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)
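
VerifyKubernetesImages works by dumping the container runtime's image list as JSON over SSH and scanning the repo tags, which is how the leftover busybox image gets called out above. A hedged sketch of the scanning half, assuming the {"images":[{"repoTags":[...]}]} shape that `sudo crictl images -o json` emits (field names inferred from that output; the allow-list below is illustrative):

	// image_scan.go: hedged sketch of flagging unexpected images in
	// `crictl images -o json` output piped in on stdin.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// imageList mirrors the assumed shape of crictl's JSON output.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			log.Fatalf("decode: %v", err)
		}
		// Illustrative allow-list; the real test derives the expected set
		// from the Kubernetes version under test (v1.24.2 here).
		expected := map[string]bool{"k8s.gcr.io/pause:3.7": true}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if !expected[tag] {
					fmt.Println("Found non-minikube image:", tag)
				}
			}
		}
	}

Fed the output of the `ssh ... "sudo crictl images -o json"` command above, a scan like this would print the gcr.io/k8s-minikube/busybox line the test reports.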

TestStartStop/group/embed-certs/serial/FirstStart (45.42s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220725132539-44543 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2
E0725 13:25:55.078841   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/enable-default-cni-20220725125922-44543/client.crt: no such file or directory
E0725 13:25:57.062530   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/bridge-20220725125922-44543/client.crt: no such file or directory
E0725 13:26:21.253493   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220725132539-44543 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2: (45.421179612s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.42s)

TestStartStop/group/embed-certs/serial/DeployApp (9.69s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-20220725132539-44543 create -f testdata/busybox.yaml: (1.559585863s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [4388cb24-ddb5-4fee-89b0-054a7ee55844] Pending
helpers_test.go:342: "busybox" [4388cb24-ddb5-4fee-89b0-054a7ee55844] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [4388cb24-ddb5-4fee-89b0-054a7ee55844] Running
E0725 13:26:30.913407   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.014924425s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.69s)
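
DeployApp ends with an exec probe: once the busybox pod reaches Running, the test shells into it and reads `ulimit -n`, which both proves the container accepts commands and records the file-descriptor limit it inherited. A minimal Go sketch of the same probe, with the kubectl invocation copied from the line above:

	// ulimit_check.go: hedged sketch of the post-deploy exec probe.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Context and pod name are taken from the log above.
		cmd := exec.Command("kubectl",
			"--context", "embed-certs-20220725132539-44543",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("exec failed: %v\n%s", err, out)
		}
		fmt.Printf("open-file limit inside busybox: %s", out)
	}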

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.7s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220725132539-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/embed-certs/serial/Stop (12.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220725132539-44543 --alsologtostderr -v=3
E0725 13:26:36.957863   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
E0725 13:26:38.196920   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/functional-20220725122408-44543/client.crt: no such file or directory
E0725 13:26:44.466916   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220725132539-44543 --alsologtostderr -v=3: (12.502703631s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.50s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543: exit status 7 (117.49282ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220725132539-44543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/embed-certs/serial/SecondStart (302.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220725132539-44543 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2
E0725 13:26:58.601713   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kubenet-20220725125922-44543/client.crt: no such file or directory
E0725 13:28:12.666222   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/cilium-20220725125923-44543/client.crt: no such file or directory
E0725 13:28:56.043558   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725125803-44543/client.crt: no such file or directory
E0725 13:29:10.437608   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
E0725 13:29:15.645677   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:15.651113   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:15.663334   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:15.683988   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:15.724245   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:15.806262   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:15.966484   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:16.288748   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:16.928995   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:18.209254   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:20.641820   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/false-20220725125922-44543/client.crt: no such file or directory
E0725 13:29:20.769606   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:25.890154   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:36.130930   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
E0725 13:29:56.612427   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220725132539-44543 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2: (5m2.464003861s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220725132539-44543 -n embed-certs-20220725132539-44543
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.99s)
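
The burst of cert_rotation.go errors during this start is noise rather than failure: client-go's certificate reloader is still watching kubeconfig entries for profiles torn down earlier in the run (no-preload, calico, skaffold, and others), and each reload attempt logs a miss on that profile's client.crt. A hedged sketch of a pre-flight check that would surface such stale entries, assuming the .minikube/profiles/<name>/client.crt layout visible in the paths above (profile list illustrative):

	// stale_profiles.go: hedged sketch that reports profile cert paths
	// which no longer exist, the condition behind the cert_rotation noise.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		base := filepath.Join(os.Getenv("HOME"), ".minikube", "profiles")
		for _, profile := range []string{ // names taken from the log above
			"no-preload-20220725131741-44543",
			"calico-20220725125923-44543",
		} {
			crt := filepath.Join(base, profile, "client.crt")
			if _, err := os.Stat(crt); os.IsNotExist(err) {
				fmt.Printf("stale profile %s: %s is missing\n", profile, crt)
			}
		}
	}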

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-mp7g8" [772f2347-bdb3-4e73-ad0c-92bdc89a2ef2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0725 13:31:52.090196   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/auto-20220725125922-44543/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-mp7g8" [772f2347-bdb3-4e73-ad0c-92bdc89a2ef2] Running
E0725 13:31:59.497647   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.015628366s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.57s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-mp7g8" [772f2347-bdb3-4e73-ad0c-92bdc89a2ef2] Running
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008582609s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220725132539-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context embed-certs-20220725132539-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.565241249s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.57s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.56s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220725132539-44543 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.56s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.78s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725133258-44543 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2
E0725 13:33:00.015712   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/kindnet-20220725125922-44543/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725133258-44543 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2: (41.776833078s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.78s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context default-k8s-different-port-20220725133258-44543 create -f testdata/busybox.yaml: (1.633629908s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [91b74a16-21fa-478b-9a6b-ec8d5e143010] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:342: "busybox" [91b74a16-21fa-478b-9a6b-ec8d5e143010] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.016873011s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.77s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220725133258-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.54s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220725133258-44543 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220725133258-44543 --alsologtostderr -v=3: (12.54024198s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.54s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543: exit status 7 (117.649008ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220725133258-44543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (298.03s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725133258-44543 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220725133258-44543 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2: (4m57.47416669s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220725133258-44543 -n default-k8s-different-port-20220725133258-44543
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (298.03s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7tp4v" [03cb3fda-d35f-4d4f-824b-390bfded730d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7tp4v" [03cb3fda-d35f-4d4f-824b-390bfded730d] Running
E0725 13:39:10.473613   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/calico-20220725125923-44543/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.016480938s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)
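
Each UserAppExistsAfterStop check is a label-selector wait: the harness polls for pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until they become ready, on a 9m0s budget. The same wait can be reproduced with one kubectl invocation; a hedged sketch driving it from Go, with the context name and timeout taken from the log:

	// dashboard_wait.go: hedged sketch of the label-selector readiness wait.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl",
			"--context", "default-k8s-different-port-20220725133258-44543",
			"wait", "pod",
			"--namespace", "kubernetes-dashboard",
			"--selector", "k8s-app=kubernetes-dashboard",
			"--for", "condition=Ready",
			"--timeout", "9m0s")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("dashboard pods never became ready: %v", err)
		}
	}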

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.55s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7tp4v" [03cb3fda-d35f-4d4f-824b-390bfded730d] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005593752s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220725133258-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0725 13:39:15.679825   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/no-preload-20220725131741-44543/client.crt: no such file or directory
start_stop_delete_test.go:291: (dbg) Done: kubectl --context default-k8s-different-port-20220725133258-44543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.547785765s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.55s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220725133258-44543 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/newest-cni/serial/FirstStart (42.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220725134004-44543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220725134004-44543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2: (42.790792253s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220725134004-44543 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (12.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220725134004-44543 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220725134004-44543 --alsologtostderr -v=3: (12.560729508s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.56s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543: exit status 7 (122.771212ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220725134004-44543 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/newest-cni/serial/SecondStart (18.63s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220725134004-44543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220725134004-44543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2: (18.084390466s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220725134004-44543 -n newest-cni-20220725134004-44543
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.63s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.58s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220725134004-44543 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.58s)

Test skip (18/289)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.2/cached-images (0.00s)

TestDownloadOnly/v1.24.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.2/binaries (0.00s)

TestAddons/parallel/Registry (17.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 9.777263ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-fw78q" [0cc99673-982b-4c04-bdb5-e20906c32f74] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008931616s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-2kb2h" [0e890105-47de-4f80-8046-fdb9c5b2f05a] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011475562s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220725121918-44543 delete po -l run=registry-test --now
=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) Done: kubectl --context addons-20220725121918-44543 delete po -l run=registry-test --now: (2.843206715s)
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220725121918-44543 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220725121918-44543 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.815325904s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.69s)
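
The Registry addon test gets as far as probing the in-cluster service, scheduling a throwaway busybox pod that runs wget --spider against registry.kube-system.svc.cluster.local, and only skips afterwards because the remaining steps assume direct host-to-cluster connectivity, which the Docker driver on macOS cannot offer. A sketch of the probe step in isolation, with the kubectl arguments taken from the log (the -t flag is dropped since this runs without a terminal; -i stays because --rm needs an attached pod):

	// registry_probe.go: hedged sketch of the in-cluster registry probe.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// -i --rm --restart=Never makes this a one-shot attached pod that
		// is torn down as soon as the probe finishes.
		cmd := exec.Command("kubectl",
			"--context", "addons-20220725121918-44543",
			"run", "-i", "--rm", "registry-test", "--restart=Never",
			"--image=gcr.io/k8s-minikube/busybox", "--",
			"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
		cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("registry probe failed: %v", err)
		}
	}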

TestAddons/parallel/Ingress (10.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220725121918-44543 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220725121918-44543 replace --force -f testdata/nginx-ingress-v1.yaml
=== CONT  TestAddons/parallel/Ingress
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220725121918-44543 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [1d9d53f8-1e6c-48b2-8169-90e7010c3b03] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [1d9d53f8-1e6c-48b2-8169-90e7010c3b03] Running
=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010921466s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220725121918-44543 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.20s)
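
Before skipping the DNS half, the Ingress test does verify routing the portable way: it curls the ingress controller from inside the node and overrides the Host header so the nginx.example.com rule matches without any DNS or tunnel setup. A sketch of that check on its own, reusing the exact ssh/curl line from the log:

	// ingress_probe.go: hedged sketch of the Host-header ingress check.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// curl runs inside the node, where the ingress controller listens
		// on port 80; the Host header selects the nginx.example.com rule.
		cmd := exec.Command("out/minikube-darwin-amd64",
			"-p", "addons-20220725121918-44543", "ssh",
			"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("ingress probe failed: %v\n%s", err, out)
		}
		fmt.Printf("%s\n", out)
	}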

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220725122408-44543 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220725122408-44543 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-jlnxx" [c3143eae-8ec4-4f0b-99be-649465dffaf3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-jlnxx" [c3143eae-8ec4-4f0b-99be-649465dffaf3] Running
E0725 12:26:54.660714   44543 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-43375-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/addons-20220725121918-44543/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.015289531s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.20s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.74s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220725125922-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220725125922-44543
--- SKIP: TestNetworkPlugins/group/flannel (0.74s)

TestNetworkPlugins/group/custom-flannel (0.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220725125922-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220725125922-44543
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.60s)

TestStartStop/group/disable-driver-mounts (0.46s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220725133257-44543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220725133257-44543
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)